On Tue, Jan 01, 2008 at 10:12:28 -0800, Jakub Narebski wrote:
> Grégoire Barbier <gb@xxxxxxxxxxxx> writes:
>
> > I think that real HTTP support is better than all workarounds we
> > will be able to find to get through firewalls (when CONNECT is not
> > available, some awful VPNs that send Ethernet over HTTP may work
> > ;-)). That's why I'm ok to work several hours on git code to
> > enhance real HTTP(S) support.
>
> There was also an idea to create a CGI program, or enhance gitweb
> to use for pushing. I don't know if it would be a better way to
> work around corporate firewalls, or not...

That is what bzr and mercurial do, and I think it would be quite a good
way to go for cases like this. E.g. while our corporate firewall does
allow anything through CONNECT on port 443 (so I can use ssh that way),
it does *not* support WebDAV in non-SSL mode, so at work I can't even
fetch from public Subversion repositories.

I have also thought about optimizing the download using CGI, but then it
occurred to me that there might be a way to statically generate packs so
that if the client wants n revisions, the number of revisions it
downloads is O(n) and the number of packs it gets them from (and thus
the number of round-trips) is O(log(n)) -- assuming the client always
wants everything up to the tip, of course.

This is trivial with linear history: pack the first half, then half of
what's left, and so on. That gives a logarithmic number of packs, and
the client never downloads more than twice as much as it needs (see the
sketch below). If somebody found a way to do this with non-linear
history (even one that satisfies the conditions only on average), it
would be a very nice improvement to the HTTP download -- the native git
server optimizes the amount of data transferred very well, but at the
cost of quite heavy CPU load on the server.

-- 
				Jan 'Bulb' Hudec <bulb@xxxxxx>
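For concreteness, here is a rough shell sketch of the linear-history
splitting described above. It assumes a strictly linear history on
'master'; the output directory name is arbitrary and only for
illustration, and the pack file names are whatever git pack-objects
chooses.

#!/bin/sh
# Sketch: split a strictly linear 'master' history into logarithmically
# many packs (oldest half first, then half of the rest, and so on).
# PACKDIR is an arbitrary name chosen for this example.
PACKDIR=static-packs
mkdir -p "$PACKDIR"

revs=$(mktemp)
git rev-list --reverse master >"$revs"   # all commits, oldest first
total=$(( $(wc -l <"$revs") ))

start=1
remaining=$total
while [ "$remaining" -gt 0 ]; do
	# Take the older half of what is still unpacked (at least one commit).
	count=$(( (remaining + 1) / 2 ))
	end=$(( start + count - 1 ))
	last=$(sed -n "${end}p" "$revs")

	if [ "$start" -eq 1 ]; then
		# First (largest) pack: everything reachable from $last.
		git rev-list --objects "$last" |
			git pack-objects "$PACKDIR/pack" >/dev/null
	else
		prev=$(sed -n "$(( start - 1 ))p" "$revs")
		# Later packs: only the objects new in $prev..$last.
		git rev-list --objects "$prev..$last" |
			git pack-objects "$PACKDIR/pack" >/dev/null
	fi

	start=$(( end + 1 ))
	remaining=$(( remaining - count ))
done

rm -f "$revs"

With N commits this produces roughly log2(N) packs of sizes about N/2,
N/4, ..., 1, with the newest commits in the smallest packs, so a client
missing only the newest n commits fetches O(log(n)) packs containing at
most about 2n commits.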