Re: git over webdav: what can I do for improving http-push ?

Jan Hudec wrote:
On Tue, Jan 01, 2008 at 10:12:28 -0800, Jakub Narebski wrote:
Grégoire Barbier <gb@xxxxxxxxxxxx> writes:

I think that real HTTP support is better than all the workarounds we
will be able to find to get through firewalls (when CONNECT is not
available, some awful VPNs that send Ethernet over HTTP may work
;-)).  That's why I'm willing to spend several hours on git code to
enhance real HTTP(S) support.
There was also an idea to create a CGI program, or to enhance gitweb,
to use for pushing. I don't know whether that would be a better way
to work around corporate firewalls or not...
I agree with this point of view.
I will search the list archives for what has already been said about this.

That is what bzr and mercurial do, and I think it would be quite a good way
to go for cases like this.
Ok, I will have to look at bzr and mercurial...

E.g. while our corporate firewall does allow anything through CONNECT on
443 (so I can use ssh that way), it does *not* support WebDAV in non-SSL
mode. So, for example, I can't even fetch from public Subversion
repositories at work.

I have also thought about optimizing download using CGI, but then I thought
that maybe there is a way to statically generate packs so that, if the client
wants n revisions, the number of revisions it downloads is O(n) and the
number of packs it gets them from (and thus the number of round-trips) is
O(log(n)). Assuming the client always wants everything up to the tip, of
course. Now this is trivial with linear history (pack the first half, then
half of what's left, etc., which gives a logarithmic number of packs, and you
always download at most twice as much as you need), but if somebody found a
way (even one that satisfies the conditions only on average) to do this with
non-linear history, it would be a very nice improvement to the http
download -- the native git server optimizes the amount of data transferred
very well, but at the cost of quite heavy CPU load on the server.
Well... frankly I don't think I'm capable of such things.
Writing a walker over webdav or a simple CGI is something I can do (I think), but I'm not tough enough (or not ready to take the time needed) to look into the internals of packing revisions (whereas I can imagine this means "my" walker would only be suitable for small projects in terms of code size and commit frequency).
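
If I understand the linear-history case correctly, though, the layout itself would look something like this sketch (the function names and revision numbering are just my own illustration, nothing that exists in git):

# Toy illustration of the static pack layout for a purely *linear* history:
# revisions are numbered 0 (oldest) .. total-1 (tip); pack the first half,
# then half of what is left, and so on. A client that wants the newest n
# revisions then touches O(log n) packs and downloads at most ~2n revisions.

def pack_layout(total):
    """Return (start, end) revision ranges, one per pack, oldest first."""
    packs = []
    start, remaining = 0, total
    while remaining > 1:
        size = remaining // 2               # pack the first half of what is left
        packs.append((start, start + size))
        start += size
        remaining -= size
    if remaining:
        packs.append((start, start + 1))    # the tip ends up in a tiny last pack
    return packs

def packs_needed(total, n):
    """Packs a client must fetch to get the newest n revisions up to the tip."""
    cutoff = total - n
    return [p for p in pack_layout(total) if p[1] > cutoff]

print(pack_layout(16))      # [(0, 8), (8, 12), (12, 14), (14, 15), (15, 16)]
print(packs_needed(16, 3))  # [(12, 14), (14, 15), (15, 16)] -> 4 revisions for 3 wanted

So a client that is only a few revisions behind the tip fetches only the last few small packs, and at worst downloads about twice what it needs.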

I had a quick look at bzr and hg, and it seems that bzr uses the easy way (a walker, no optimizations) and hg uses a CGI (therefore, maybe, optimizations). By a quick look I mean that I sniffed the HTTP queries on the network during a clone. I need to look harder...

BTW I never looked at the git:// protocol. Do you think that tunneling the git protocol through a CGI (hg uses URLs of the form "/mycgi?cmd=mycommand&...", so I think "tunnel" is not a bad word...) would perform well? Maybe it's not that hard to write a performant HTTP/CGI protocol for git if it's based on existing code such as the git protocol.
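
To make the idea concrete, here is a toy dispatcher in the hg style (the command names, URL layout and repository path are all invented; the hard part, which this one-shot sketch ignores entirely, would be mapping the native protocol's multi-round pack negotiation onto HTTP request/response pairs):

#!/usr/bin/env python3
# Toy hg-style "?cmd=..." CGI front end for a git repository -- everything
# here (command names, URL layout, repository path) is invented for
# illustration. It only runs one-shot read-only commands; a real tunnel of
# the git:// protocol would also need to carry the pack negotiation, which
# is a multi-round dialogue and does not fit a single request/response.

import os
import subprocess
import sys
from urllib.parse import parse_qs

REPO = "/srv/git/project.git"                 # assumed repository location

# Commands the CGI is willing to run, keyed by the ?cmd= value.
COMMANDS = {
    "heads": ["git", "ls-remote", REPO],                # advertise refs
    "revs":  ["git", "-C", REPO, "rev-list", "--all"],  # list all revisions
}

def main():
    query = parse_qs(os.environ.get("QUERY_STRING", ""))
    cmd = query.get("cmd", [""])[0]
    if cmd not in COMMANDS:
        sys.stdout.write("Status: 400 Bad Request\r\n\r\nunknown cmd\r\n")
        return
    output = subprocess.run(COMMANDS[cmd], stdout=subprocess.PIPE).stdout
    sys.stdout.buffer.write(b"Content-Type: text/plain\r\n\r\n")
    sys.stdout.buffer.write(output)

if __name__ == "__main__":
    main()

A client would then issue requests like GET /mycgi?cmd=heads, in the same spirit as hg's "?cmd=..." URLs.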

--
Grégoire Barbier - gb at gbarbier.org - +33 6 21 35 73 49

