Re: Smart fetch via HTTP?

On Fri, May 18, 2007 at 10:01:52 +0100, Johannes Schindelin wrote:
> Hi,
> 
> On Thu, 17 May 2007, Jan Hudec wrote:
> 
> > On Thu, May 17, 2007 at 10:41:37 -0400, Nicolas Pitre wrote:
> >
> > > And if you have 1) the permission and 2) the CPU power to execute such 
> > > a cgi on the server and obviously 3) the knowledge to set it up 
> > > properly, then why aren't you running the Git daemon in the first 
> > > place?  After all, they both boil down to running git-pack-objects and 
> > > sending out the result.  I don't think such a solution really buys 
> > > much.
> > 
> > Yes, it does. I had 2 accounts where I could run CGI, but not a separate 
> > server, at university while I studied, and now I can get the same on a 
> > friend's server. Neither of them would probably be OK for serving a larger, 
> > busy git repository, but something smaller accessed by several people is 
> > OK. I think this is quite common for university students.
> 
> 1) This has nothing to do with the way the repo is served, but how much 
> you advertise it. The load will not be lower, just because you use a CGI 
> script.

No, it won't be. But lowering the load was never the purpose of the "smart
CGI". The purpose was to minimize bandwidth usage (and connectivity is still
not so cheap that you wouldn't care) while still working over HTTP, either
because the users need to access it from behind a firewall, or because the
administrator is not willing to set up git-daemon for you, whereas a CGI you
can run yourself.
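
For illustration only, here is a rough sketch of what such a CGI could look
like. This is a toy example, not anything that exists: the repository path is
made up, and a real version would have to deal with auth, errors, and the
fact that the native fetch protocol is stateful and takes several round
trips, which a single request/response relay glosses over.

#!/usr/bin/env python3
# Hypothetical "smart fetch" CGI: relay one request/response exchange
# between the HTTP client and git upload-pack. REPO is an invented path.
import os
import subprocess
import sys

REPO = "/home/user/public_git/project.git"  # hypothetical repository

def main():
    length = int(os.environ.get("CONTENT_LENGTH") or 0)
    body = sys.stdin.buffer.read(length) if length else b""

    # upload-pack speaks the native fetch protocol on stdin/stdout;
    # we feed it the request body and stream its answer back.
    proc = subprocess.Popen(["git", "upload-pack", REPO],
                            stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE)
    out, _ = proc.communicate(body)

    sys.stdout.buffer.write(b"Content-Type: application/octet-stream\r\n\r\n")
    sys.stdout.buffer.write(out)

if __name__ == "__main__":
    main()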

> 2) you say yourself that git-daemon would have less impact on the load:

NO, I didn't -- at least not in the paragraph below.

In the paragraph below I said that *network* use will never be as good with a
*dumb* solution as it can be with a smart one, no matter whether the smart
one runs over a special protocol or over HTTP.

---

Of course it would be less efficient in both CPU and network load, because
there is the overhead of the web server and of the HTTP headers.

Actually, I like the ranges solution. If accompanied by a repack strategy
that does not pack everything together, but instead creates packs with a
limited number of objects -- so that the indices don't exceed a configurable
size, say 64kB -- it would not be that much less efficient for the network,
and it would have the advantage of working without the ability to execute
CGI.
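
To sketch what I mean by the ranges solution (assuming a plain web server
that honours Range headers; the URL and offsets here are invented): the
client downloads the small .idx files whole, looks up the offsets of the
objects it needs, then pulls only those byte ranges of the .pack files.

# Fetch a slice of a remote pack file with an HTTP Range request.
import urllib.request

def fetch_range(url, start, end):
    """Fetch bytes start..end (inclusive) of a remote file."""
    req = urllib.request.Request(
        url, headers={"Range": "bytes=%d-%d" % (start, end)})
    with urllib.request.urlopen(req) as resp:
        if resp.status != 206:  # 206 Partial Content = range honoured
            raise RuntimeError("server does not support byte ranges")
        return resp.read()

# Hypothetical usage, once an object's offset and length are known
# from the index:
#   chunk = fetch_range("http://host/repo.git/objects/pack/pack-x.pack",
#                       offset, offset + length - 1)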

> > > [...]
> > >
> > > Et voilà.  Oh, and of course update your local refs from the 
> > > remote's.
> > > 
> > > Actually there is nothing really complex in the above operations. And 
> > > with this the server side remains really simple with no special setup 
> > > nor extra load beyond the simple serving of file content.
> > 
> > On the other hand the amount of data transferred is larger than with the 
> > git server approach, because at least the indices have to be transferred 
> > in entirety.
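
For a sense of scale of that index transfer, here is the arithmetic for the
v1 pack index format: a 256-entry fan-out table of 4-byte counts, one 24-byte
entry (4-byte offset + 20-byte SHA-1) per object, and a 40-byte trailer (pack
checksum + index checksum).

# Back-of-the-envelope size of a v1 pack index.
def idx_size(num_objects):
    return 256 * 4 + 24 * num_objects + 40

# So the 64kB index cap mentioned above works out to roughly
# 2,700 objects per pack:
print((64 * 1024 - 256 * 4 - 40) // 24)  # 2686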

-- 
						 Jan 'Bulb' Hudec <bulb@xxxxxx>
