gfs and cluster nodes rebooting

Hi,

I have configured a two-node cluster and it is working fine with the SAN (ext3 file system). After this I configured GFS as follows:

root# pvcreate /dev/sdb
root# vgcreate -c y vg1_gfs /dev/sdb
root# lvcreate -n db_store -l 100%FREE vg1_gfs
root# /etc/init.d/clvmd start

clvmd is started on both nodes.
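To double-check that the volume group is really clustered, vgs should show a trailing "c" in the Attr column, something like this (the sizes here are just placeholders):

root# vgs vg1_gfs
  VG      #PV #LV #SN Attr   VSize  VFree
  vg1_gfs   1   1   0 wz--nc 50.00G    0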

root# mkfs -t gfs2 -p lock_dlm -t db_clust:db_store -j 4 /dev/vg1_gfs/db_store
root# service gfs2 start

root# chkconfig --level 345 clvmd on
root# chkconfig --level 345 gfs2 on
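As far as I know, the gfs2 init script only mounts GFS2 entries that are listed in /etc/fstab, so an entry like the one below would be needed (the mount point /mnt/db_store is just an example). If the mount is instead managed as a cluster resource through rgmanager, the fstab entry is not needed, since rgmanager does the mounting itself.

/dev/vg1_gfs/db_store  /mnt/db_store  gfs2  defaults  0 0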

----------------
The problem is: when I changed the File System (ext3) resource to a GFS resource, the nodes started rebooting.

There is nothing in /var/log/messages, but when I checked the console of the node there were some messages related to GFS:
DLM id:0 ...
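From what I have read, a reboot with nothing in /var/log/messages usually means the node was fenced, or the kernel panicked before syslog could write anything, so the console output is the best clue. From the node that stays up (assuming a standard cman/RHCS setup, which the lock_dlm and clvmd usage suggests), the membership and fence domain state can be checked with:

root# cman_tool status
root# cman_tool nodes
root# group_tool ls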

So I removed GFS and switched back to the File System (ext3) resource.

Can I install Oracle on the File System (ext3) resource?

Or how do I troubleshoot the GFS reboots?
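One way to capture console messages that never reach /var/log/messages is netconsole, which sends kernel output over the network to a remote syslog host; a minimal sketch (eth0 and 192.168.0.10 below are placeholders for the sending interface and the remote host):

root# modprobe netconsole netconsole=@/eth0,514@192.168.0.10/

The receiving machine needs syslogd started with -r so it accepts remote messages.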

Any help would be appreciated.

 
Thanks,
Muhammad Ammad Shah
 




--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
