Hi,

On Tue, 12 Dec 2006, Linus Torvalds wrote:

> On Tue, 12 Dec 2006, Johannes Schindelin wrote:
> > On Tue, 12 Dec 2006, Nicolas Pitre wrote:
> > > On Tue, 12 Dec 2006, Johannes Schindelin wrote:
> > > >
> > > > But it would become a non-problem when the HTTP transport would learn
> > > > to read and interpret the .idx files, basically constructing thin
> > > > packs from parts of the .pack files ("Content-Range:" comes to
> > > > mind)...
> > >
> > > Woooh.
> >
> > Does that mean "Yes, I'll do it"? ;-)
>
> Umm. I hope it means "Woooh, that's crazy talk".
>
> You do realize that then you need to teach the http-walker about walking
> the delta chain all the way up? For big pulls, you're going to be a lot
> _slower_ than just downloading the whole dang thing, because the delta
> objects are often just ~40 bytes, and you've now added a ping-pong latency
> for each such small transfer.

Two points:

- For loose objects, the HTTP walker does exactly that. This is the normal
  case for "just a few objects since the last fetch". It will _never_ be
  the case for the initial clone!

- Usually, the object fetching can be parallelized, because you want
  multiple objects which sit in disjoint delta chains. And for those, you
  can request something like "Range: bytes=15-31,64-79,108-135" IIRC. You
  could even fetch sensibly sized chunks, i.e. cut only at multiples of
  512 to make the transport more efficient, and fetch only the parts which
  are _still_ missing.

So, a crazy idea, yes. But a feasible one. Just not crazy enough to be
tempting for me (I use the git protocol whenever possible, too).

Ciao,
Dscho

-
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html
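[Editorial sketch of the multi-range idea above: given object (offset, length) extents as one might read them from a .idx file, round each extent out to 512-byte boundaries, merge adjacent chunks, and emit a single multi-range request header. This is a hypothetical Python illustration, not code from git; all names are made up.]

```python
def chunk_ranges(extents, align=512):
    """Round each (offset, length) extent out to align-byte boundaries
    and merge overlapping or adjacent chunks into larger ones."""
    chunks = []
    for offset, length in sorted(extents):
        start = (offset // align) * align          # round start down
        end = -(-(offset + length) // align) * align - 1  # round end up, inclusive
        if chunks and start <= chunks[-1][1] + 1:
            # Touches or overlaps the previous chunk: extend it.
            chunks[-1] = (chunks[-1][0], max(chunks[-1][1], end))
        else:
            chunks.append((start, end))
    return chunks

def range_header(extents, align=512):
    """Build one HTTP request header covering all wanted extents."""
    chunks = chunk_ranges(extents, align)
    return "Range: bytes=" + ",".join(f"{s}-{e}" for s, e in chunks)

# Three objects in disjoint delta chains, far apart in the pack:
print(range_header([(15, 17), (2000, 40), (5000, 100)]))
# -> Range: bytes=0-511,1536-2047,4608-5119
```

A server answering such a request returns a multipart/byteranges body, one part per chunk, each tagged with its own Content-Range.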