Stéphane Di Cesare wrote:
> I have an application that must be installed on central storage, to be
> used by a number of machines. All machines are guaranteed to have the
> same architecture.
> The configuration for this application depends on the role of the
> machine, and is located on a specific directory for each (but still on
> central storage: more or less, /central/storage/$HOST).
>
> The questions are: is it a good idea to use RPM in that case? If so,
> what is the best way to set this up?

Sure, if you want. Using a separate database just for that directory would probably be best, because you will not be able to make use of dependencies installed on the distributed machines. This is almost the same problem as using rpm to manage files on a non-rpm host such as HP-UX or Solaris, except that here the underlying host system is probably already rpm based, so I would use a dedicated rpm database to avoid confusion with the host system's own database.

  rpm --dbpath /central/storage/rpm_db

or

  rpm --dbpath /central/storage/$HOST/rpm_db

> My approach would be the following:
> - one "core" package for the software itself, without configuration.
>   (app.rpm)
> - several configuration packages for each role (app-config-a.rpm,
>   app-config-b.rpm, etc), which require the core package.

That can work. Having the configuration completely in rpm too can be a nice way to track changes. If you are the only one operating the system this is fine. But... making configuration file changes through rpm can also be tedious and carry significant overhead. Most people who follow you in this task will probably find it too much work and balk at it (personal experience here), so I would suggest not doing this for the configuration.

I migrated to using a set of shell scripts to make the configuration changes to the config files outside of the package. I keep the scripts in version control. When I want to make changes I change the scripts, which in turn edit the configuration files. All changes are logged in version control so a history is kept. The scripts automate setting up new hosts, so new systems are handled the same way every time.

> The core package is installed from one machine (rpm -i app.rpm). The
> RPM database is then updated from all other machines (rpm -i --justdb
> app.rpm), and the config packages are applied afterwards.

This idea I do not like very much. I think that trying to manage this half through the shared application area and half through the distributed machines is a recipe for problems. But you did not say quite enough for me to really know what you are suggesting here. I would avoid trying to make applications that are installed into the central server area (over NFS?) known to the local machines' package management. Is that what you are trying to do? I would simply know that they are installed on the central server and that that is where the package management tools manage them.

> Using --justdb is of course not an ideal solution, but everything else
> I can think about is even worse. I am not familiar with relocating the
> RPM database, but there is a possibility that the machines might have
> local software installed, so I cannot rely on having the RPM database
> completely on central storage.

I am still not clear why you want to do --justdb at all in this case. If the applications are maintained on a central server then they need to be installed there exactly once, with the database that manages them updated at the same time.
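For example, a one-time install on the server into the dedicated database suggested above might look something like this (the paths and package names are only taken from your mail for illustration, and this assumes the package either installs its files under /central/storage or is relocatable with --prefix):

  rpm --dbpath /central/storage/rpm_db -ivh app.rpm
  rpm --dbpath /central/storage/rpm_db -ivh app-config-a.rpm

The same --dbpath is used for later queries and upgrades:

  rpm --dbpath /central/storage/rpm_db -qa
  rpm --dbpath /central/storage/rpm_db -Uvh app.rpm
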
The clients who access the central storage will have the updated files all at the same time and do not need anything more to happen on them. If you are thinking that the clients need to be updated, then using central storage is probably not the right direction for you: once the files are packaged up they are trivial to install on the clients from the package, and then the files are installed and each client's rpm database is updated together.

> Has anyone got a better idea, or just comments?

Once you have gone to the trouble of creating an rpm package, using central storage for it starts to be questionable. As soon as you have an rpm package it is so easy to install on the client hosts (with the various network file system issues completely avoided) that simply installing it on the distributed hosts makes a lot of sense. The distributed client hosts can be kept up to date with apt, yum, autorpm, radmin, or many other possibilities.

Bob
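P.S. If you do decide to install on the clients instead, a minimal sketch of that route (package names are only illustrative, and this assumes the packages are published in a repository the clients already use):

  yum install app app-config-a

or, copying the packages to each client by hand:

  rpm -Uvh app-1.0-1.i386.rpm app-config-a-1.0-1.i386.rpm

Either way the files and the client's own rpm database are updated together in one step.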