Re: Cron Jobs


 



Hi

I think it is relatively easy to make cron jobs "cluster safe". For
shell scripts I do it the following way (here the job should only run on
the node where the mysqld service is running):

# Who am I?
THIS_NODE=$(/usr/sbin/clustat | grep Local | awk '{print $1}')

# On which node is the service this cron job depends on running?
RUN_NODE=$(/usr/sbin/clustat | grep mysqld | awk '{print $2}')

# Quote both variables so the test does not break if clustat output is empty.
if [ "$THIS_NODE" != "$RUN_NODE" ]; then
    echo "ERROR! Wrong cluster node."
    echo "This shall run from the same node where mysqld is running."
    exit 1
fi

This works fine for me for a lot of cron jobs.
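
With that guard at the top of the script, the very same crontab entry can
be installed on every cluster node, and only the node that currently owns
mysqld gets past the check. A made-up example entry (script path and
schedule are only illustrative):

# root crontab on every node - the script itself decides whether to do anything
30 2 * * * /usr/local/bin/mysql_nightly_dump.sh

In a pure cron context it may make more sense to exit 0 quietly on the
wrong node instead of printing an error, so the non-owning nodes do not
mail an error every time the job fires.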

Martin Fürstenau

Senior System Engineer • Océ Printing Systems GmbH





On Tue, 2010-03-30 at 23:48 +0000, Jankowski, Chris wrote:
> Hi,
> 
> 1.
> >>>yeah, my first inkling was to symlink /etc/cron.daily but that breaks so much existing functionality.
> 
> I was actually thinking about the /var/spool/cron/crontabs directory. You can put your cron definitions there in the old UNIX style. It works perfectly well and is more general and flexible than the /etc/cron.* files, I believe.
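> 
> For example (the job itself is made up), a per-user file there is just a
> standard five-field crontab; a file for user "fred" might contain:
> 
> 0 3 * * * /usr/local/bin/nightly_report.sh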
> 
> 2.
> >>>I followed you until you spoke of remote nodes? What exactly do you have in mind?
> 
> I implemented that approach in my old failover scripts for Digital/Compaq/HP TruCluster. Attached is a README file for this functionality. This will give you the concepts, although there are bits there that are TruCluster specific like CDSLs. If you are interested I am happy to share the scripts from which you can extract the relevant code and modify it for your needs.
> 
> Regards,
> 
> Chris
> 
> --------------
> 
> #
> #
> #	Crontab file management.
> #	------------------------
> #
> #	There is a need for having a schedule of commands for cron
> #	that is active on a node only when the service is running
> #	on this node.
> #	In other words, certain commands must be scheduled only
> #	when the service is running and only on the node on which
> #	the service is running.
> #
> #	One way to implement it would be to modify every such command
> #	to check for the presence of the service on the node on which
> #	the command is run. This would be quite cumbersome if there
> #	is a large number of such commands.
> #
> #	Another way to achieve execution of commands dependent
> #	on presence of a service would be by writing a jacket script
> #	taking as arguments the name of service in question and the
> #	pathname of the script to be executed and its arguments.
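> #
> #	A minimal sketch of such a jacket script (the script name is made
> #	up and the cluster query is left as a placeholder) could look like:
> #
> #	    #!/bin/sh
> #	    # runsvc <service> <command> [args...]
> #	    SERVICE="$1"; shift
> #	    # service_runs_here stands for whatever command the cluster
> #	    # software provides to test where <service> is currently placed.
> #	    service_runs_here "$SERVICE" || exit 0
> #	    exec "$@"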
> #
> #	The implementation here takes advantage of the fact that service
> #	specific cron processing is commonly done by a certain user or users
> #	and that crontab(1) maintains a separate configuration file for each
> #	user. Thus, it is relatively easy to manipulate the crontab file 
> #	of such a user.
> #
> #	A directory is chosen, e.g. /usr/local/crontabs.
> #
> #	This directory contains templates of crontab files for users that
> #	are associated with certain services in a sense that the cron jobs
> #	for such a user are to be run only on the node on which this service
> #	is running.
> #
> #	The script starting the service will install the template as the
> #	crontab file for such a user on startup of the service.
> #
> #	The template of the crontab file should be named after the username
> #	with the extension service_on.
> #
> #	Eg. for a user "fred" and chosen extension ".service_on" the template
> #	should be named:
> #
> #	fred.service_on
> #
> #	Typically, by convention, the name of the CAA application resource
> #	will be used as the "service" string in the extension.
> #
> #	The contents of the template will be active on the member running
> #	the service for the lifetime of the service.
> #
> #	On a graceful shutdown of the service the script will install
> #	another template of the crontab file for the user.
> #
> #	This template of the crontab file should be named after the username
> #	with a predefined extension.
> #
> #	Eg. for a user "fred" and chosen extension ".service_off" the template
> #	should be named:
> #
> #	fred.service_off
> #
> #	Typically, by convention, the name of the CAA application resource
> #	will be used as the "service" string in the extension.
> #
> #	The contents of the template will be active on every member not running
> #	the service at the time. 
> #
> #	This template specifies periodically scheduled processing for a user
> #	on members that do not run the service at the time. 
> #	The file may of course contain no commands, but it should exist.
> #
> #	Of course both of those templates should be in the standard crontab(1)
> #	format.
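> #
> #	For illustration only (the actual start/stop scripts are not
> #	reproduced here), activating and deactivating the schedule for
> #	user "fred" and a service called, say, "mysvc" then comes down to:
> #
> #	    # on service start
> #	    crontab -u fred /usr/local/crontabs/fred.mysvc_on
> #
> #	    # on graceful service stop
> #	    crontab -u fred /usr/local/crontabs/fred.mysvc_off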
> #
> #	Notes and assumptions:
> #
> #	1.
> #	Please note that the above mechanism of crontab file management 
> #	assumes that a user is associated with only one service.
> #	More state would need to be kept if a user would need different
> #	processing depending on whether 0, 1, 2 or more services were 
> #	running on a node.
> #
> #	2.
> #	Please note that /var/spool/cron is a CDSL in the TCS cluster and thus
> #	all crontab files in /var/spool/cron/crontabs are node specific.
> #
> #	3.
> #	If a node dies suddenly and then reboots, then it will reboot
> #	with a set of crontabs that may not reflect the current state
> #	of services on the node after reboot.
> #	In fact the node will have all the crontabs from the moment it
> #	crashed augmented by changes caused by any services restarted
> #	on it after its reboot.
> #
> #	What is really needed is another script - run on boot from the
> #	/sbin/rc2.d directory - that installs the correct initial, inactive
> #	(*.service_off) versions of the crontabs on boot.
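> #
> #	A rough sketch of such a boot-time script (error handling omitted;
> #	the directory and extension follow the examples above):
> #
> #	    #!/bin/sh
> #	    # Install the inactive crontab template for every user that has one.
> #	    for f in /usr/local/crontabs/*.service_off
> #	    do
> #	        [ -f "$f" ] || continue
> #	        user=$(basename "$f" .service_off)
> #	        crontab -u "$user" "$f"
> #	    done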
> #
> #	4. 
> #	The crontab templates must be readable by the user for whom
> #	they are to be installed.
> 
> 
> 
> -----Original Message-----
> From: linux-cluster-bounces@xxxxxxxxxx [mailto:linux-cluster-bounces@xxxxxxxxxx] On Behalf Of Joseph L. Casale
> Sent: Wednesday, 31 March 2010 09:42
> To: 'linux clustering'
> Subject: Re:  Cron Jobs
> 
> >1.
> >What about replacing the directory containing the cron job descriptions in /var with a symbolic link to a directory on the shared filesystem.
> 
> yeah, my first inkling was to symlink /etc/cron.daily but that breaks so much existing functionality.
> 
> >2.
> >Your application service start/stop script may modify the cron job description files. This is more complex, as it has to deal with remote nodes that may be down.
> 
> I followed you until you spoke of remote nodes? What exactly do you have in mind?
> Thanks!
> jlc
> 
> 




--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster

