RE: Problem with SAN after migrating to RH cluster suite

Robert,

Thanks for replying.  In particular, the init script for the Perforce
server (p4d) fails with a 'no such file or directory' error.  However,
if one waits 30 seconds after boot and runs /etc/init.d/p4d by hand, it
works.  We made sure this is not an application problem by changing
/etc/init.d/p4d to run other commands like 'ls' and 'touch', which also
report that the directory on the SAN is not available.
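
A possible stopgap would be to have the init script wait for the SAN
mount point before starting p4d.  A minimal sketch, assuming a
hypothetical mount point of /p4/data (substitute whatever path p4d
actually uses), would be something like:

# Wait up to 60 seconds for the SAN file system to appear in
# /proc/mounts before starting the Perforce server.
# /p4/data is a made-up path for illustration only.
SAN_MOUNT=/p4/data
tries=0
while ! grep -q " $SAN_MOUNT " /proc/mounts; do
    tries=$((tries + 1))
    if [ "$tries" -ge 60 ]; then
        echo "p4d: $SAN_MOUNT still not mounted after 60 seconds" >&2
        exit 1
    fi
    sleep 1
done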

I ran the 'chkconfig' command you suggested, and it produced:

ccsd            0:off   1:off   2:on    3:on    4:on    5:on    6:off
fenced          0:off   1:off   2:on    3:on    4:on    5:on    6:off
cman            0:off   1:off   2:on    3:on    4:on    5:on    6:off

Does that shed any light?
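
One thing I notice is that 'gfs' does not show up in that listing at
all.  If that's relevant, I assume I can check it and turn it on with
something like:

chkconfig --list gfs
chkconfig gfs on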

Thanks for your help!

Brian Hartin


-----Original Message-----
From: Robert Peterson [mailto:rpeterso@xxxxxxxxxx] 
Sent: Monday, March 19, 2007 12:42 PM
To: linux clustering
Cc: Hartin, Brian; Seeley, Joiey
Subject: Re: Problem with SAN after migrating to RH cluster suite

Hartin, Brian wrote:
> Hello all,
> 
> I'm relatively new to Linux, so forgive me if this question seems off.
> 
> We recently moved from a cluster running RHEL 4/Veritas to a new
> cluster running RHEL 4/Red Hat Cluster Suite.  In both cases, a SAN
> was involved.
> 
> After migrating, we see a considerable increase in the time it takes
> to mount the SAN.  Some of our init.d scripts fail because the SAN is
> not up yet.  Our admin tried changing run levels to make the scripts
> run later, but this doesn't help.  One can even log in via SSH shortly
> after boot and the SAN is not yet mounted.  Could this be normal
> behavior?  When a service needs access to files on the SAN should it
> be started by some cluster mechanism?  Or should we be looking for
> some underlying problem?
> 
> Incidentally, the files on the SAN are not config files, they are
> data.  All config files are on local disk.
> 
> Thanks for any help,
> 
> B
Hi Brian,

I'm not quite sure I understand what the problem is.  Under ordinary
circumstances, there should be no extra time required as far as I know.
If you have all of your cluster startup scripts in place and chkconfiged
on, then I think you should be able to mount immediately.
What init scripts are failing because of the SAN and what do they say
when they fail?

In theory, you should be able to have all of these init scripts turned 
"on" so they run at init time:

ccsd
cman
fenced

If you have GFS mount points in your /etc/fstab, you may also want to
enable:

gfs
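
For example, a GFS mount point in /etc/fstab might look something like
this (the device name and mount point here are made up):

/dev/vg_san/lv_p4data   /p4/data   gfs   defaults   0 0

The gfs init script then mounts the fstab entries of type gfs at boot.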

(You can also check rgmanager if you're using rgmanager failover
services for High Availability).

You can check these by doing this command:

chkconfig --list | grep "ccsd\|cman\|fenced\|gfs"
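
If any of them show "off" for runlevels 3, 4 and 5, you can turn them
on with, for example:

chkconfig ccsd on
chkconfig cman on
chkconfig fenced on
chkconfig gfs on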

So a gfs file system should be able to mount when the system is booting.
I don't recommend messing with the order of the scripts though.
I hope this helps.

Regards,

Bob Peterson
Red Hat Cluster Suite

