Re: Smart fetch via HTTP?

On 5/16/07, Johannes Schindelin <Johannes.Schindelin@xxxxxx> wrote:
> On Wed, 16 May 2007, Martin Langhoff wrote:
> > Do the indexes have enough info to use them with http ranges? It'd be
> > chunkier than a smart protocol, but it'd still work with dumb servers.
> It would not be really performant, would it? Besides, not all Web servers
> speak HTTP/1.1...

Performant compared to downloading a huge packfile to get at 10% of it?
Sure! It'd probably take a few round trips, and you might end up fetching
20% of the file, but that's still better than 100%.
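
To make that concrete, here's a minimal sketch of a ranged fetch. The URL
and byte offsets are made up; a real client would compute the offsets by
parsing the pack's .idx file, which maps object SHA-1s to offsets inside
the packfile.

import urllib.request

# Hypothetical pack URL and offsets; in practice the offsets come from
# the corresponding .idx file.
PACK_URL = "http://example.com/repo.git/objects/pack/pack-1234.pack"
start, end = 4096, 8191  # inclusive byte range covering the wanted objects

req = urllib.request.Request(PACK_URL)
req.add_header("Range", "bytes=%d-%d" % (start, end))

with urllib.request.urlopen(req) as resp:
    if resp.status == 206:   # 206 Partial Content: we got just the slice
        chunk = resp.read()
        print("got %d bytes of the pack" % len(chunk))
    else:                    # 200 OK: server ignored Range, sent everything
        print("server ignored the range header; got the whole file")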

> Besides, not all Web servers speak HTTP/1.1...

Are there any interesting webservers out there that don't? Hand-rolled,
purpose-built webservers often don't, but those don't serve files; they
serve web apps. When it comes to serving files, any webserver that still
gets security support these days speaks HTTP/1.1.
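
A client could even probe for range support up front: servers that serve
byte ranges usually advertise it with an Accept-Ranges header. A rough
sketch, using the same hypothetical URL as above (the header is optional,
so the 206-vs-200 check on the actual GET stays the real test):

import urllib.request

# HEAD request to see whether the server advertises byte-range support.
req = urllib.request.Request(
    "http://example.com/repo.git/objects/pack/pack-1234.pack",
    method="HEAD")
with urllib.request.urlopen(req) as resp:
    print("Accept-Ranges:", resp.headers.get("Accept-Ranges", "(not sent)"))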

And for services like SF.net it'd be a safe, low-CPU way of serving git
files, because the git protocol is quite expensive server-side (I/O and
CPU), as we've seen with kernel.org. Being really smart with a CGI is
probably going to be expensive too.

cheers,


m
