Johannes Schindelin <Johannes.Schindelin@xxxxxx> writes:

> Hi,
>
> On Thu, 17 May 2007, Martin Langhoff wrote:
>
>> On 5/16/07, Johannes Schindelin <Johannes.Schindelin@xxxxxx> wrote:
>> > On Wed, 16 May 2007, Martin Langhoff wrote:
>> > > Do the indexes have enough info to use them with http ranges? It'd be
>> > > chunkier than a smart protocol, but it'd still work with dumb servers.
>> > It would not be really performant, would it? Besides, not all Web servers
>> > speak HTTP/1.1...
>>
>> Performant compared to downloading a huge packfile to get 10% of it?
>> Sure! It'd probably take a few trips, and you'd end up fetching 20% of
>> the file, still better than 100%.
>
> Don't forget that those 10% probably do not do you the favour to be in
> large chunks. Chances are that _every_ _single_ wanted object is separate
> from the others.

FYI, bzr uses HTTP range requests, and the introduction of this feature
led to a significant performance improvement for them (bzr is more
dumb-protocol oriented than git is, so it really matters there). They
have the same "index file + data file" layout, so you download the full
index file, and then send an HTTP range request to fetch only the
relevant parts of the data file.

The thing is, AIUI, they don't send N range requests to get N chunks:
they send a single HTTP request asking for all N ranges at once, and get
the N chunks back as a whole (IIRC, a kind of MIME multipart-encoded
response from the server). So you pay the price of a longer HTTP
request, but not the price of N network round-trips.

That's surely not as efficient as anything smart on the server side, but
it might really help in the cases where the server is /not/ smart.

--
Matthieu
-
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
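The multi-range exchange described above can be sketched roughly as
follows. This is a hypothetical illustration, not bzr's or git's actual
code: `make_range_header` builds the single `Range` header covering N
ranges, and `parse_byteranges` pulls the N chunks back out of a
`multipart/byteranges` response body. The function names, the `SEP`
boundary, and the sample body are all made up for the example.

```python
def make_range_header(ranges):
    """Build one Range header asking for several (start, end) byte pairs."""
    return "bytes=" + ",".join(f"{s}-{e}" for s, e in ranges)

def parse_byteranges(body, boundary):
    """Split a multipart/byteranges body into (content-range, data) chunks."""
    chunks = []
    for part in body.split(b"--" + boundary):
        part = part.strip()
        if not part or part == b"--":  # skip preamble and closing delimiter
            continue
        headers, _, data = part.partition(b"\r\n\r\n")
        crange = None
        for line in headers.split(b"\r\n"):
            if line.lower().startswith(b"content-range:"):
                crange = line.split(b":", 1)[1].strip().decode()
        chunks.append((crange, data))
    return chunks

# One header covering two ranges, instead of two round-trips:
header = make_range_header([(0, 3), (10, 13)])

# A hand-built multipart/byteranges body such as a server might return:
boundary = b"SEP"
body = (b"--SEP\r\n"
        b"Content-Range: bytes 0-3/32\r\n\r\n"
        b"ABCD\r\n"
        b"--SEP\r\n"
        b"Content-Range: bytes 10-13/32\r\n\r\n"
        b"KLMN\r\n"
        b"--SEP--")
chunks = parse_byteranges(body, boundary)
```

The point of the sketch is only the shape of the trade-off: one request
line grows with N, but the network round-trips do not.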