Martin Langhoff wrote:
> On 12/11/06, Linus Torvalds <torvalds@xxxxxxxx> wrote:
>> Sure, if the proxies actually do the right thing (which they may or
>> may not do).
>
> For a high-traffic setup like kernel.org, you can set up a local
> reverse proxy -- it's a pretty standard practice. That lets you
> control a well-behaved, locally tuned caching engine just by
> emitting good headers.
>
> It beats writing and maintaining an internal caching mechanism for
> each CGI script out there by a long mile. It also means no further
> tunables or complexity for administrators of other gitweb installs.
If gitweb produced cache-friendly headers, squid could definitely serve
as an HTTP front-end ("HTTP accelerator" mode in squid talk).
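To make the idea concrete, here is a minimal sketch (in Python rather than gitweb's Perl, and with an illustrative `max_age` value) of the kind of cache-friendly headers a CGI script could emit so that a front-end cache can do the right thing:

```python
import hashlib
import time
from email.utils import formatdate

def cache_headers(body, max_age=300):
    """Build cache-friendly HTTP headers for a CGI response.

    max_age: how many seconds a shared cache may serve the stored
    copy before revalidating (a tunable, 300 is just an example).
    """
    # A strong validator derived from the content, so the proxy can
    # revalidate cheaply with If-None-Match instead of refetching.
    etag = '"%s"' % hashlib.sha1(body).hexdigest()
    return [
        "Content-Type: text/html; charset=utf-8",
        # "public" lets shared caches (like a squid accelerator)
        # store the response, not just the browser.
        "Cache-Control: public, max-age=%d" % max_age,
        # Expires is redundant with max-age but helps older caches.
        "Expires: " + formatdate(time.time() + max_age, usegmt=True),
        "ETag: " + etag,
    ]

body = b"<html>...project summary...</html>"
for line in cache_headers(body):
    print(line)
print()  # blank line separates headers from body in CGI output
```

With headers like these, the accelerator handles expiry and revalidation; the CGI script itself stays cache-unaware.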
In fact, given kernel.org's slave1/slave2<->master setup, that's a
pretty natural fit for caching files and/or cache-aware CGI output.
You could even replace rsync to the slaves if squid were running on the
slaves as the front-end accelerator, talking to the master.
squid is smart enough to hold off a thundering herd, pulling only a
single cacheable copy of each file from the origin as needed.
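For reference, a hypothetical squid.conf fragment (squid 2.6-era accelerator syntax; the hostnames are illustrative, not kernel.org's real ones) showing the slave-side setup described above:

```
# Listen as an HTTP accelerator, not a forward proxy.
http_port 80 accel defaultsite=git.kernel.org

# Fetch cache misses from the master (hostname is illustrative).
cache_peer master.kernel.org parent 80 0 no-query originserver

# Collapse a thundering herd: concurrent requests for the same
# cacheable URL trigger only one fetch to the origin.
collapsed_forwarding on
```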
Jeff
-
To unsubscribe from this list: send the line "unsubscribe git" in
the body of a message to majordomo@xxxxxxxxxxxxxxx
More majordomo info at http://vger.kernel.org/majordomo-info.html