Problem with SAN after migrating to RH cluster suite

Hello all,

I'm relatively new to Linux, so forgive me if this question seems off.

We recently moved from a cluster running RHEL 4 with Veritas to a new
cluster running RHEL 4 with Red Hat Cluster Suite.  In both setups the
shared storage lives on a SAN.

After migrating, we see a considerable increase in the time it takes
for the SAN filesystems to be mounted at boot.  Some of our init.d
scripts fail because the SAN is not up yet.  Our admin tried changing
runlevels so the scripts would start later, but that doesn't help.  We
can even log in via SSH shortly after boot and find the SAN still
unmounted.  Could this be normal behavior?  When a service needs
access to files on the SAN, should it be started by some cluster
mechanism (rgmanager, perhaps -- see the sketch below)?  Or should we
be looking for some underlying problem?
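
From the docs, my rough understanding is that rgmanager can enforce
this ordering by nesting a script resource inside an fs resource in
/etc/cluster/cluster.conf.  A minimal sketch of what I mean (the
resource and service names, /dev/sdb1, and /data are invented
placeholders for our real device, mount point, and init script):

  <rm>
    <resources>
      <!-- assumption: /dev/sdb1 and /data stand in for our actual
           SAN device and mount point -->
      <fs name="sandata" device="/dev/sdb1" mountpoint="/data"
          fstype="ext3" force_unmount="1"/>
      <!-- assumption: myapp stands in for one of our init.d scripts -->
      <script name="myapp" file="/etc/init.d/myapp"/>
    </resources>
    <service name="appsvc" autostart="1">
      <!-- nesting the script inside the fs should make rgmanager
           mount the filesystem before it runs the script -->
      <fs ref="sandata">
        <script ref="myapp"/>
      </fs>
    </service>
  </rm>

If I've read things right, rgmanager would then mount /data before it
calls "/etc/init.d/myapp start", which would avoid the race we see at
boot.  Is that the intended approach, or is it overkill here?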

Incidentally, the files on the SAN are not config files; they are
data.  All config files are on local disk.

Thanks for any help,

B

