Hi,

I am still feeling my way around RHCS, and today I added a dummy fencing
agent just to see how fencing agents are called. The script says hello to
syslog, dumps its environment to a file in /tmp, and returns "Done" and a
success exit code to fenced:

cat fence_ack_null
#!/usr/bin/perl -w
#
# Fencing agent that always succeeds
#
use Sys::Syslog;

my $dump = "/tmp/fence_ack_null.dat";
open( OUT, ">>$dump" ) || die "Failed: $!";
print OUT "=" x 80, "\n", scalar(localtime), "\n";
foreach my $a ( sort keys %ENV ) {
    print OUT "$a => $ENV{$a}\n";
}
print OUT "fence_ack_null @ARGV\n";
close(OUT);

openlog( 'fence_ack_null', 'cons,pid', 'user' );
syslog( 'info', 'fence_ack_null called' );
closelog();

print "Done\n";
exit 0;

In
the dump of the environment, I was surprised to find items from the login
environment I started the cluster from:

================================================================================
Tue May 18 09:40:24 2010
HOME => /home/martin
HOSTNAME => vm-031-rhel64-mw
LANG => en_GB
LOGNAME => root
LS_COLORS => no=00:fi=00:di=00;34:ln=00;36:pi=40;33:so=00;35:bd=40;33;01:cd=40;33;01:or=01;05;37;41:mi=01;05;37;41:ex=00;32:*.cmd=00;32:*.exe=00;32:*.com=00;32:*.btm=00;32:*.bat=00;32:*.sh=00;32:*.csh=00;32:*.tar=00;31:*.tgz=00;31:*.arj=00;31:*.taz=00;31:*.lzh=00;31:*.zip=00;31:*.z=00;31:*.Z=00;31:*.gz=00;31:*.bz2=00;31:*.bz=00;31:*.tz=00;31:*.rpm=00;31:*.cpio=00;31:*.jpg=00;35:*.gif=00;35:*.bmp=00;35:*.xbm=00;35:*.xpm=00;35:*.png=00;35:*.tif=00;35:
MAIL => /var/spool/mail/martin
PATH => /sbin:/usr/sbin:/bin:/usr/bin
PWD => /home/martin
SHELL => /bin/bash
SHLVL => 1
SUDO_COMMAND => /etc/init.d/cman start
SUDO_GID => 1003
SUDO_UID => 1003
SUDO_USER => martin
TERM => xterm
USER => root
USERNAME => root
_ => /sbin/fenced
fence_ack_null

To my mind, this is a bug in the cluster system. I would expect a daemon
process to sanitise its environment to prevent unexpected side effects from
creeping in. Checking /proc/<pid>/environ for the cluster daemons,
the following are affected:

    ccsd
    groupd
    fenced
    dlm_controld
    gfs_controld
    clurgmgrd

Which raises a question: is there a bug-tracking system for linux-clustering
where I can record this issue?

regards,
Martin
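P.S. To make the point concrete, here is a hedged sketch (my own example, not
anything taken from the cluster code) of the kind of scrubbing I would expect,
using env -i to start a process from an empty environment:

```shell
# Start a child with an empty environment plus an explicit, trusted PATH.
# A daemon (or the init script that starts cman) could do the equivalent
# so that fence agents never inherit HOME, SUDO_USER, LS_COLORS, etc.
env -i PATH=/sbin:/usr/sbin:/bin:/usr/bin sh -c 'env | sort'
```

Run under sudo from my shell, a child started this way would see only the
variables set explicitly on that command line, not my login environment.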
--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster