[Yum] deploying and maintaining linux networks howto

On Thu, 24 Apr 2003, Carroll, Jim P [Contractor] wrote:

> > Rob Brown, who posted here earlier, has put together a good article
> > about scalable system development. I think we've done a good job of
> > implementing a pretty scalable system based on Red Hat Linux,
> > kickstart, PXE and yum at Duke, particularly in the physics
> > department.
> 
> I'd be interested in seeing that article.  I'd also be interested in
> learning how to dovetail PXE into our environment.

Coming soon in Linux Magazine.  I hesitate to distribute it too broadly
before publication, but there is a link on the brahma page under
"resources" to the support stuff I put up to help the article out
(articles have word limits, but I don't seem to :-).

I'm hoping I can post the article text as submitted (in a crude markup,
basically) under my standard OPL or a slightly modified OPL after the
magazine appears, but that'll be up to the LM editor -- I haven't
asked.  They ARE paying me for the article, though, so they're probably
going to have some licensing issues as well.

I do think that the new HOWTO is functionally very similar to what the
article suggests anyway -- install a cluster (or a LAN) via DHCP
(possibly augmented by PXE if your NICs are capable), using a kickstart
file customized for beowulf nodes (if applicable), then use yum to keep
their installation current against a regularly updated repository.  This
methodology (developed locally by Seth Himself) keeps system
installation and maintenance time down to close to the theoretical
minimum.  It isn't uncommon out in the world, either.  I've communicated
with beowulf people who've installed linux on an entire 64 node cluster
over gigabit links in something absurd, like eleven minutes start to
finish.  Even allowing for ten minutes plus per node and using a boot
floppy instead of PXE, one can do lots of nodes in parallel and install
sixteen nodes in about thirty minutes (I've done that one myself, and
could probably beat that easily now, ON PROVEN HARDWARE).
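
To make that concrete, here is roughly what the pieces look like.  None
of this is the literal Duke configuration -- the addresses, paths and
repository layout below are invented for illustration, and the syntax
is the Red Hat 7.x/8.x-era dialect -- but it shows how little glue is
involved:

    # /etc/dhcpd.conf fragment -- hand PXE-capable NICs a boot image
    subnet 192.168.1.0 netmask 255.255.255.0 {
        range 192.168.1.100 192.168.1.163;    # the node pool
        next-server 192.168.1.1;              # TFTP server
        filename "pxelinux.0";                # from the syslinux package
    }

    # ks.cfg fragment -- unattended install, customized for a node
    install
    nfs --server=192.168.1.1 --dir=/export/redhat
    clearpart --all --initlabel
    autopart
    %packages
    @ Base

    %post
    # have every node track the local repository automatically
    echo "0 4 * * * root yum -y update" > /etc/cron.d/yum-update

Boot the node over PXE (or from a bootdisk with something like
ks=nfs:192.168.1.1:/ks.cfg on the kernel command line) and walk away;
the %post cron job is what keeps the installation current afterwards.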

> > The generalized case is easy though; you must get to a point where
> > you create a certain economy of scale for the systems you're
> > maintaining.

> There's been some discussion on the Infrastructures mailing list about
> being able to reliably recreate a system using a properly created
> infrastructure.  Certain points such as order dependency were brought
> up.  That is, 2 systems might have different (x)inetd setups, at least
> as far as MD5 on the config file(s) is concerned.  However, they may be
> functionally identical.

I think Seth manages that sort of thing by maintaining a DB of "system
identities" that are reloaded when a system is reinstalled.  But he's
probably already told you about that -- I personally went fishin' today
and am just catching up on mail from this morning.
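
I don't know the internals of Seth's database, but the general shape of
such a scheme is easy to sketch: stash each host's "identity" -- ssh
host keys, the handful of config files that legitimately differ per
machine -- on a central server, and pull it back down in %post at
reinstall time.  Everything below (server, paths, layout) is invented
for illustration:

    %post
    # restore this host's saved identity after a fresh kickstart
    IDSERVER=192.168.1.1
    HOST=`hostname -s`
    mkdir -p /mnt/identity
    mount -o ro ${IDSERVER}:/export/identity /mnt/identity
    # tarball holds ssh host keys, per-host config deltas, etc.
    tar -C / -xzf /mnt/identity/${HOST}.tar.gz
    umount /mnt/identity

Something along those lines also answers the MD5 point above: if the
per-host deltas all come from one place, a reinstalled box isn't merely
functionally identical, it's bit-identical where it matters.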

   rgb

> jc
> 
> > > BTW, is there anyplace one can find a cfengine RPM?
> > 
> > is cfengine still being maintained?
> > 
> > -sv

-- 
Robert G. Brown	                       http://www.phy.duke.edu/~rgb/
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Phone: 1-919-660-2567  Fax: 919-660-2525     email:rgb@xxxxxxxxxxxx




