On Thu, 2008-11-13 at 10:56 +0100, Juan Ramon Martin Blanco wrote:
> First of all, hello and many thanks to everyone; this list has helped me
> a lot in the cluster world ;)
>
> I have configured a 2-node cluster with RHEL 5.2, shared storage and
> GFS2.
> I have configured several services with our company's own software. This
> software evolves fast because we are in active development, so
> sometimes cores are dumped. When this happens, the cluster tries to
> restart the failing service again and again, filling the service's
> filesystem with cores.
> Is there any way to limit the number of retries for a certain service?

  <service max_restarts="x" restart_expire="y" .../>

max_restarts="x"
  * Maximum number of restarts tolerated. Ex: 3 means the *4th* restart
    will fail.

restart_expire="y"
  * After this number of seconds, a restart is forgotten.

-- Lon

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
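
For illustration, here is a minimal sketch of how these attributes might appear
on a <service> block in /etc/cluster/cluster.conf, assuming the attribute names
given in the reply above; the service name, the <script> resource reference,
and the values 3 and 300 are placeholders, not taken from the thread:

  <rm>
    <!-- Tolerate at most 3 restarts; forget each restart after 300 seconds -->
    <service name="myservice" autostart="1" recovery="restart"
             max_restarts="3" restart_expire="300">
      <script ref="myservice-init"/>
    </service>
  </rm>

With these placeholder values and per the explanation above, the fourth restart
attempt within the 300-second window would fail rather than be retried.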