[Yum] gpg public keys

On 6 Mar 2003, seth vidal wrote:

> Hi,
>  As some of you know gpg pkg signing, etc was changed in rpm 4.1 and
> beyond.
> 
> Now keys are stored in the rpmdb, they're dealt with by beecrypt, they
> don't require gpg, etc etc.
> 
> But, if you do an rpm -qa you'll see them in your list.
> 
> I'm thinking about screening them out of the yum list output.
> Also thinking about making it so key imports could be handled via the
> yum command line. (not sure about this one, yet)
> 
> I thought it might look cleaner if you don't have a bunch of gpg-pubkey
> things sitting in your yum list output.
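
(For anyone who hasn't poked at an rpm 4.1 box yet, the keys really do
show up as pseudo-packages -- the key IDs below are made up for
illustration, yours will differ:

  # imported keys live in the rpmdb and list like ordinary packages
  $ rpm -qa 'gpg-pubkey*'
  gpg-pubkey-a1b2c3d4-3e5f6a7b
  gpg-pubkey-deadbeef-3c0ffee1

  # importing a vendor's key goes through rpm itself, no gpg needed
  $ rpm --import /path/to/RPM-GPG-KEY

so yes, screening them out of the yum list output would look a lot
cleaner.)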

Is there any way to fully encapsulate gpg keychecking?  As in, have yum
always check gpg signatures and never tell you about it unless they fail
to match?  Or is there something chicken-and-eggish about this...

Forgive me if it already does this.
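
(If it doesn't, something like a per-server switch in the config file
would seem to do it -- a rough sketch only, and I'm guessing at the
option name, so don't hold me to the spelling:

  # /etc/yum.conf -- sketch; "gpgcheck" is my guess at the knob's name
  [main]
  cachedir=/var/cache/yum
  gpgcheck=1                # verify every package, only complain on failure

  [updates]
  name=Updates
  baseurl=http://mirror.example.org/updates/
  gpgcheck=1                # or turn it on/off per server

with yum quietly checking everything and only opening its mouth when a
signature fails or there is no key to check against.)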

As for output in yum list, a column with Y/N/NA in it (Y: the gpg key
checks, N: it fails, NA: not applicable, no key available) might be a
decent option to have, as might a flag to tell it to list only packages
whose key status is N, NA, or just N.  Those commands might play a
useful role in a security audit or a "what's wrong with this damn
system" audit, presuming of course that one can trust yum itself on a
compromised system.
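
Just to make that concrete, I'm imagining something vaguely like the
following -- the column and the flag are pure invention on my part,
nothing like them exists today:

  $ yum list
  Name                      Arch    Version         Sig
  kernel                    i686    2.4.20-8        Y    <- signature checks
  locally-built-thing       i386    1.2-1           NA   <- no key available
  something-sketchy         i386    0.9-3           N    <- signature FAILS

  # hypothetical flag: show only packages that don't verify
  $ yum list --sig N,NA

Crude, but it would make the audit case a one-liner.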

BTW, I sat and suffered through an hour of the Aduva demo, and now fully
understand their product.  I present a BRIEF summary for the benefit of
the list.

The product in question was a generalized package/revisioning management
scheme, not unlike yum in purpose and scope, designed to go into
topdown/centralized management linux environments, especially those that
I would call "a bloody mess".

The Aduva toolset, OnStage, works as follows:

  a) Aduva maintains a centralized repository of RPMs that, AFAICT, is
the union of all the major distributions that use RPMs.  My general
feeling was that they use RH as their central base, but are nevertheless
essentially distribution agnostic.

  b) They install and run mixes of RPMs (ACROSS DISTRIBUTIONS) on a
heavy duty testbed LAN and build a massive dependency/compatibility
table.  I believe that they rebuild RPMs "on demand" to resolve
dependency/compatibility problems for clients or to insert client RPMs
into their tables.  They also have an automated kernel rebuilding tool
so that num-nums can (within reason) configure and build a custom kernel
off any particular base.

  c) They provide their clients with a GUI tool to run on a management
console/server.  This server autogenerates a cached archive (drawn from
the Aduva repository) of the packages and dependency tables required by
the organization.

  d) All client systems get a root-privileged daemon that can be
accessed (over ssl) from the server.  This daemon is basically
equivalent to yum in very crude terms EXCEPT that it is run from the
central server instead of the client (topdown).  It can run scripts,
gather information on all installed RPMs, get RPMs, remove RPMs,
generally groom a distribution according to the master console's
directives.

  e) The master console provides client views and task views that are
something VERY MUCH LIKE what we were discussing as GUI encapsulations
of yum, and of course are nearly feature-identical with yum itself.
With a few clicks one could see what a client needed to update itself to
"current", accept or reject the proposed RPMs, and initiate an update.
About what one would expect.

  f) The master console could do a few other thingies -- clone a server
system (make a copy of its rpm list, then push it onto a client with
some degree of intelligence wrt /etc configuration, although I gathered
from my questions that this wasn't likely to be foolproof if it worked
at all).  It did not provide encapsulated node/client dhcp/pxe or
floppy/kickstart or whateverthehellyoulike installation -- it did not
seem to be an installation tool.  It did give one a GUI to control
cronly automation of updates and so forth.

The PRIMARY ADVANTAGE of the tool over what we have here at Duke already
is its ability to rationally manage bloody mess LANs, specifically ones
that have a RH 6.2 workstation next to a RH 7.3 workstation next to a
Mandrake system next to a SUSE system, with some SUSE RPMs installed on
the RH boxes and the need to use RPMs built on the Mandrake box on all
of the above.  Their master database resolves dependencies and
functionalities much more broadly than any single distribution.  This
is, as they openly admitted, their primary added value (along with the
GUI itself and the ability for one person to control revisioning and
updates throughout a topdown organization).

This could clearly be of value to organizations that a) have topdown,
centralized management; b) through incompetence (most likely) or because
of the cost/difficulty of porting or rebuilding mission-critical
applications, have to run linux across versions and/or distributions.

Neither of these describes a typical University; quite the opposite. The
thought of running a root privileged daemon controllable from a single
campus master on every system on campus at Duke both makes me shudder
(the horror, the horror!) and laugh uncontrollably at the same time.
There would be mobs carrying torches and pitchforks at the very
suggestion, and I'd be right out front.

Then there is the Evil of mindless heterogeneity and distribution
crossover -- obviously nobody with GOOD management skills would ever do
something so stupid at an institutional level, although one can easily
see how things evolve into that state in a fairly chaotic, decentralized
environment or in one that is incompetently managed.  Still, there DO
occur cases where somebody ends up with a system being "frozen" (at
least for a time) at some distribution because e.g. libc changes
dramatically and it would cost a great deal to port a critical app to
the new version, and there are always cases where a particular package
has been packaged for Mandrake but you want to run it under RH, and a
simple rpm --rebuild fails (leaving you screwing around with figuring
out what it needs and how, or whether, to provide it).  Their tool is
likely to be of SOME help in the incompetently managed, centralized
organizations (hiding and minimizing the impact of the bad management
practices) and is at least a different, possibly cheaper, pathway to
interoperability in a well-managed environment (where they would most
likely freeze a distribution on certain systems or slog through the rpm
ports now).
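
(The rebuild I mean is just the usual grab-the-src.rpm-and-try-again
dance -- the package name below is invented, and note that with rpm 4.1
the build verbs moved out of rpm proper into rpmbuild:

  # rpm 4.1 and later
  $ rpmbuild --rebuild somepackage-1.0-1mdk.src.rpm

  # older rpm 3.x/4.0 spelling
  $ rpm --rebuild somepackage-1.0-1mdk.src.rpm

...which works right up until a BuildRequires or a library version
doesn't match, at which point the screwing around begins.)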

Needless to say I pointed out to them that root-privileged execution
daemons, ssl-authenticated or not, were Dark Evil of the grimmest sort
and anathema in any decentralized environment (like Duke) where even the
great and noble Sethbot has neither privilege nor interest in HAVING
privilege (and the attendant responsibility) outside of his private
demesne, where the Duke Hospital has significant legal constraints on
who has root access to systems that might well freely use yum (as a
CLIENT SIDE tool), that the proper solution to mindless heterogeneity is
to eliminate it (where possible), and that all of these things place
fairly strong constraints on their likely market.  Perhaps they'll sell
to undermanaged corporations (and provide decent value there).  Perhaps
they'll find at least a small market at places like Fermilab, where in
years past they've shown a bit of a penchant for trailing distributions
due to the porting problem for some of the MC apps (although I think
they've been better lately, right Seth?).  They'll never sell here.

They did have a bit of difficulty with the open source issue -- I tried
pointing out that their real added value was the centralized server,
testing, and dependency resolution process, together with supported task
encapsulation tools and training for those local sysadmins, but alas,
they still think of themselves as selling software, and hence are doomed
to misery and despair as yum and other OS tools slice their market out
from under them for free.  They did seem to listen, though, to my (one
and only) free consulting advice concerning topdown vs decentralized
management.  If they want more advice, I think I'll make them pay for
it.  OS solutions are worth contributing to for all the usual reasons;
closed source companies need to pay, pay, pay.

   rgb

-- 
Robert G. Brown	                       http://www.phy.duke.edu/~rgb/
Duke University Dept. of Physics, Box 90305
Durham, N.C. 27708-0305
Phone: 1-919-660-2567  Fax: 919-660-2525     email:rgb@xxxxxxxxxxxx





