On Thu, 5 Jun 2003, Carroll, Jim P [Contractor] wrote:

> Are you suggesting that the RPM be rebuilt to accommodate a different
> yum.conf across the various hosts on the LAN?  I don't mean a unique
> yum.conf for each host, simply a common yum.conf across all hosts.
>
> If this is what you're suggesting, I suppose you could do it that way.
> I wouldn't.  I would push yum.conf from the trusted gold server, or
> make it part of a kickstart postinstall, or manage it through cfengine,
> or through various other mechanisms.  (Ref: www.infrastructures.org )
>
> If I've misunderstood you, just bump this to /dev/null.  :)

No, all of those are perfectly reasonable ways to do things also -- it's
just that there is (in the mighty words of Perlism) more than one way to
do it, and different needs being met.

I was suggesting that there are lots of ways one might want to customize
LAN yum updating from a locally built and maintained server (we've just
seen a short list of them on the list :-), and that nearly all of them --
well, they don't quite "require" that you work from the tarball rather
than the rpm and/or build a yum rpm for each distribution/architecture
repository, but that is one of the more straightforward ways (a way that
requires no additional tools or control of the client systems).

At Duke, we do indeed build and distribute a yum rpm inside each of the
distributions we locally build and support for Duke-only distribution (a
private campus-only server, not the mirror or public dulug ftp sites).
This rpm is preconfigured to update, via cron, from the right server
(the one it installed from) and the right path (the one it installed
from) on the right schedule (during the time frame selected as suitable
for a nightly update, shuffled to prevent server overload during that
interval); there's a rough sketch of what that amounts to in the P.S.
below.  That way anyone who installs from those servers can be a
complete novice without the faintest clue about what yum is or does, and
their system will STILL automagically update itself every night
unless/until the system's owner becomes smart enough (and stupid
enough :-) to stop it.  This makes the campus security officer happy --
well, happier, at any rate -- and requires NO CENTRALIZED PRIVILEGES on
the owner's system.

I think you are thinking in the context of top-down management where you
control all of the systems in question, which is fine and common enough,
but one of yum's very powerful features is that it is a client-pull
tool, NOT a push tool, and hence facilitates distributed management in a
"confederation of semi-autonomous enterprises" model that describes (for
example) a lot of University campuses.  Like ours.  In this model, the
person who manages the top-level campus repositories (Seth) does NOT
have root control of 80% of the systems that use that facility, or of
quite a few of the secondary repositories that overlay local rpms on top
of the campus-wide base.  I think that he would hurt anybody who
suggested that he be given that kind of control -- and responsibility.
I personally am not worried, as by now he's probably going to hurt me
anyway.

But that is why I was suggesting that in many/most cases someone setting
up a yum repository will want to rebuild the yum rpm -- it's just an
easy way to arrange it so that the people who install from that
repository will automagically yum update from it as well, in a locally
controlled manner.

   rgb

Robert G. Brown                        http://www.phy.duke.edu/~rgb/
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Phone: 1-919-660-2567  Fax: 919-660-2525     email: rgb@xxxxxxxxxxxx
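
P.S.  For the curious, here is a rough sketch of the sort of thing that
gets baked into a rebuilt yum rpm.  The hostname, paths, and timings are
invented for illustration (not Duke's actual layout), and the yum.conf
shown uses the old single-file [server]-section format -- adjust to
taste for your own repository:

    # /etc/yum.conf -- shipped preconfigured inside the rebuilt yum rpm
    [main]
    cachedir=/var/cache/yum
    debuglevel=2
    logfile=/var/log/yum.log
    pkgpolicy=newest
    distroverpkg=redhat-release

    # One [server] section pointing at the server and path this machine
    # installed from (hence one rebuilt rpm per distribution/arch repo).
    [base]
    name=Campus base for this distribution and architecture
    baseurl=http://yum.example.edu/linux/9/i386/

    #!/bin/bash
    # /etc/cron.daily/yum.cron -- nightly update, with a random sleep so
    # the clients don't all hammer the server in the same minute.
    sleep $((RANDOM % 3600))
    /usr/bin/yum -y update

In this sketch the only per-repository pieces are those two files;
everything else in the yum rpm can stay stock.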