[Yum] More suggestions...

I realized I suggested pushing from the trusted server, but in fact the
Infrastructures web site has a strong preference for pull over push.  In
a properly set up environment, you can have junior admins with no real
privs setting up new systems, booting them up, leaving for dinner, and
when they come back, the new systems are running as part of the greater
whole, no hand-tuning or custom cobbling required.  :)

I don't really think of it as top-down management.  The environment can
be as rigid or as flexible as required.  Setting up an rsync server with
standard config files to be slurped down to the clients is one approach.
"These are the files we use which will make your life easier."  ;)

jc


> -----Original Message-----
> From: Robert G. Brown [mailto:rgb@xxxxxxxxxxxx]
> Sent: Thursday, June 05, 2003 1:57 PM
> To: Carroll, Jim P [Contractor]
> Cc: seth vidal; yum mailing list
> Subject: RE: [Yum] More suggestions...
>
> On Thu, 5 Jun 2003, Carroll, Jim P [Contractor] wrote:
>
> > Are you suggesting that the RPM be rebuilt to accommodate a different
> > yum.conf across the various hosts on the LAN?  I don't mean a unique
> > yum.conf for each host, simply a common yum.conf across all hosts.
> >
> > If this is what you're suggesting, I suppose you could do it that
> > way.  I wouldn't.  I would push yum.conf from the trusted gold
> > server, or make it part of a kickstart postinstall, or manage it
> > through cfengine, or through various other mechanisms.  (Ref:
> > www.infrastructures.org )
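> >
> > In the kickstart case, the %post section only needs a line or two --
> > a sketch, with a made-up "goldserver" name and path:
> >
> >     %post
> >     # pull the site-standard yum.conf down from the gold server
> >     wget -O /etc/yum.conf http://goldserver/configs/yum.conf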
> >
> > If I've misunderstood you, just bump this to /dev/null.  :)
>
> No, all of those are perfectly reasonable ways to do things also --
> it's just that there is (in the mighty words of Perl-ism) more than
> one way to do it, and different needs being met.
>
> I was suggesting that there are lots of ways one might want to
> customize LAN yum-updating from a locally built and maintained server
> (we've just seen a short list of them on the list:-), and that nearly
> all of them -- well, they don't quite "require" that you work from the
> tarball rather than the rpm, and/or build a yum rpm for each
> distribution/architecture repository, but it is one of the more
> straightforward ways (a way that requires no additional tools or
> control of the client systems).
>
> At Duke, we do indeed build and distribute a yum rpm inside each of
> the distributions we locally build and support for Duke-only
> distribution (a private campus-only server, not the mirror or public
> dulug ftp sites).  This rpm is preconfigured to update, via cron, from
> the right server (the one it installed from) and the right path (the
> one it installed from), on the right schedule (during the time frame
> selected as suitable for a nightly update, shuffled to prevent server
> overload during that interval).
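>
> Concretely, that amounts to a couple of canned files baked into the
> rebuilt rpm -- a sketch, with made-up server name, path, and minute:
>
>     # /etc/yum.conf fragment pointing back at the install server
>     [base]
>     name=Local base
>     baseurl=http://installserver.example.edu/linux/9/i386
>
>     # /etc/cron.d/yum-nightly; the minute is randomized per host at
>     # install time so clients don't all hit the server at once
>     17 3 * * * root /usr/bin/yum -y update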
>
> That way anyone who installs from those servers can be a complete
> novice without the faintest clue about what yum is or does, and their
> system will STILL automagically update itself every night unless/until
> the system's owner becomes smart enough (and stupid enough:-) to stop
> it.  This makes the campus security officer happy -- well, happier, at
> any rate -- and requires NO CENTRALIZED PRIVILEGES on the owner's
> system.
>
> I think you are thinking in the context of top-down management, where
> you control all of the systems in question, which is fine and common
> enough, but one of yum's very powerful features is that it is a
> client-pull tool, NOT a push tool, and hence facilitates distributed
> management in a "confederation of semi-autonomous enterprises" model
> that describes (for example) a lot of University campuses.  Like ours.
> In this model, the person who manages the toplevel campus repositories
> (Seth) does NOT have root control of 80% of the systems that use that
> facility, or quite a few of the secondary repositories that overlay
> local rpms on top of the campus-wide base.
>
> I think that he would hurt anybody who suggested that he be given that
> kind of control -- and responsibility.  I personally am not worried,
> as by now he's probably going to hurt me anyway.  But that is why I
> was suggesting that in many/most cases someone setting up a yum
> repository will want to rebuild the yum rpm -- it's just an easy way
> to arrange it so that the people who install from that repository will
> automagically yum update from it as well, in a locally controlled
> manner.
>
>    rgb
>
> Robert G. Brown                       http://www.phy.duke.edu/~rgb/
> Duke University Dept. of Physics, Box 90305
> Durham, N.C. 27708-0305
> Phone: 1-919-660-2567  Fax: 919-660-2525     email:rgb@xxxxxxxxxxxx




