I have configured a two-node cluster with RHEL 5.2, shared storage and GFS2.
I have configured several services that run our company's own software. This software evolves quickly because we are in active development, so it sometimes dumps core. When that happens, the cluster tries to restart the failing service again and again, filling the service's filesystem with core files.
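As a stopgap I have been thinking about pointing core dumps at a local directory instead of the shared filesystem, roughly like this (untested sketch; the /var/crash path is just a placeholder I picked):

    # write cores to a local directory rather than the service's filesystem
    mkdir -p /var/crash
    echo 'kernel.core_pattern = /var/crash/core.%e.%p' >> /etc/sysctl.conf
    sysctl -p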
Is there any way to limit the number of retries for a certain service?
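I have seen the max_restarts and restart_expire_time service attributes mentioned for newer rgmanager releases; is something along these lines supported on 5.2? (The service name and values below are only examples.)

    <service name="myservice" autostart="1" recovery="restart"
             max_restarts="3" restart_expire_time="600">
        <!-- service resources here -->
    </service>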
Thanks in advance,
Juan Ramón Martín Blanco