Re: HOWTO unmaintained?

Linux Advanced Routing and Traffic Control


 



> It looks like a number of people are offering sites -
> IMHO, a "distributed" wiki (ie: you can edit at any of
> the sites) or a master/mirror setup would be good, as
> that would help prevent problems if site maintainers
> get kidnapped by aliens, sites get slashdotted, etc.

I think the Wiki, if that route is chosen, should live on the www.lartc.org domain. This means we will have to find and contact the administrators of that domain and its DNS servers. As far as the distributed web site goes, I think it is a good idea. To pull off a distributed site we would need the DNS records to resolve to multiple boxen across the net. I have considered a self-replicating setup for some of my own servers, and at present I'm looking at using Coda or AFS for replicating / caching local copies of the remote file system content.

I've never dealt with Wikis other than as an end user (and I'll say that the ones I've looked at have been slow), so I don't know what they take to set up. I suspect they use a database, so we would want the Wiki to use a database with real-time replication between the two (or more) web servers that the Wiki name points to. I would be more than happy to help with such an endeavor. I can not host it at my office (the bosses will not let me), but I can help provide content and / or convert stuff.
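For what it's worth, the simplest way to get one name resolving to multiple boxen is plain round-robin DNS: you publish several A records for the same name and the server rotates the order of answers. A zone file fragment might look like the following (the IPs are documentation placeholders, not real LARTC hosts, and real replication of the wiki database would still be needed behind it):

```
; Hypothetical round-robin records for the wiki -- addresses are
; RFC 3330 documentation placeholders, shown only as a sketch.
www     IN  A   192.0.2.10      ; mirror 1
www     IN  A   198.51.100.20   ; mirror 2
www     IN  A   203.0.113.30    ; mirror 3
```

Round-robin only spreads load; it does not notice a dead mirror, so a master/mirror setup would still want some monitoring that pulls a failed host's record out of the zone.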

> It would also be good if at least one site offered
> multiple ways to connect - eg: via an IPSec tunnel or
> via IPv6 - as this would give people a simple way of
> testing what they're trying.

Again, I am not able to host this, but I think it could be done relatively easily by offering a host with multiple IPs bound to it and giving people a UML (User Mode Linux) instance they can test things in. Much of the routing / firewalling work that I have done can easily be done inside a UML. The host would need to be fairly capable and run a UML itself to act as the router into the UML farm / UML switch backplane. Again, I would be more than willing to help set up such a system (and would enjoy it at that). I think it would be interesting to do this with multiple distributions, and possibly multiple versions thereof. To pull this off, though, the box would need to be fairly powerful to support many people at one time. I'd say you could get away with a dual multi-GHz proc box with at least 2 - 4 GB of RAM. I would expect that to support 10+ concurrent users inside UML doing some compiling, or more if they are just using precompiled binaries.
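Since a UML guest is just a userspace binary, handing each user a throwaway test box is mostly a matter of launch flags, and copy-on-write overlays mean the base root filesystem is stored only once. A sketch of launching one guest (the filenames, umid, tap device, and host IP below are made-up assumptions for illustration):

```
# Hypothetical per-user UML guest -- paths, tap device, and IPs are
# placeholders. ubd0=cow,backing gives this user a private COW overlay
# on a shared root image; eth0 attaches to a host tap interface.
./linux umid=user1 \
        ubd0=user1.cow,root_fs.debian \
        mem=128M \
        eth0=tuntap,tap1,,10.0.0.1 \
        con=pty con0=fd:0,fd:1
```

Wiring each guest's tap device into a uml_switch (or bridging on the host) is what would give the "backplane" the guests route across.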



Grant. . . .
_______________________________________________
LARTC mailing list
LARTC@xxxxxxxxxxxxxxx
http://mailman.ds9a.nl/cgi-bin/mailman/listinfo/lartc
