Have you configured multipath, or are you using /dev/sda directly from your SAN?
This happens when one node of the cluster has two interfaces on the same network segment, with IPs in the same subnet. The node then sends out cluster messages with the wrong source IP instead of the IP defined in /etc/cluster/cluster.conf.
To solve the issue, just shut down the IP that is not defined in /etc/cluster/cluster.conf.
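To illustrate the fix above, here is a minimal sketch of how you might spot the duplicate address and take it down. The interface name eth1 and the address 192.168.1.20/24 are hypothetical examples, not values from the original report:

```shell
# List all IPv4 addresses per interface; look for two NICs
# whose addresses fall in the same subnet.
ip -4 addr show

# Suppose eth1 carries the stray address (hypothetical interface name).
# Take the whole interface down, RHEL network-scripts style:
ifdown eth1

# ...or remove just the offending address (hypothetical IP),
# leaving the interface up for other traffic:
# ip addr del 192.168.1.20/24 dev eth1
```

After that, restart cman (or reboot the node) so cluster messages are sourced from the IP that matches cluster.conf.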
Regards,
Rajat J Patel
D 803 Royal Classic
Link Road
Andheri West
Mumbai 53
+919920121211
www.taashee.com
On Mon, Jan 25, 2010 at 6:24 PM, Muhammad Ammad Shah <mammadshah@xxxxxxxxxxx> wrote:
Dear Rajat,
HI,
I have configured a two-node cluster and it is working fine with the SAN (ext3 filesystem). After this I configured GFS as follows:
root# pvcreate /dev/sdb
root# vgcreate -c y vg1_gfs /dev/sdc1
root# lvcreate -n db_store -l 100%FREE vg1_gfs
root# /etc/init.d/clvmd start
Started on both nodes.
root# mkfs -t gfs2 -p lock_dlm -t db_clust:db_store -j 4 /dev/vg1_gfs/db_store
root# service gfs start
root# chkconfig --level 345 clvmd on
root# chkconfig --level 345 gfs on
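One thing worth checking after a setup like the above: in the mkfs lock table db_clust:db_store, the first part (db_clust) must exactly match the cluster name in /etc/cluster/cluster.conf, or the mount will be refused. A short sketch of how to compare the two, assuming the same device name as above:

```shell
# Cluster name as cman sees it (must match the cluster.conf <cluster name=...>):
cman_tool status | grep "Cluster Name"

# Lock table recorded in the GFS2 superblock at mkfs time:
gfs2_tool sb /dev/vg1_gfs/db_store table
```

If the two names disagree, re-run mkfs with the correct -t value (this destroys the filesystem contents, so do it before putting data on it).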
----------------
The problem is that when I changed the File System (ext3) resource to a GFS resource, the nodes started rebooting.
There is nothing in /var/log/messages, but when I checked the console of the node there was a message related to GFS:
DLM id:0 ...
So I removed GFS and switched back to the File System (ext3) resource.
Can I install Oracle on a File System (ext3) resource?
Or how can I troubleshoot the GFS reboots?
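On troubleshooting a reboot that never reaches /var/log/messages: one common approach is to stream the kernel's console output to another machine over UDP with netconsole, so the panic text survives the reboot. A sketch, assuming hypothetical addresses (192.168.1.10 is the crashing node, 192.168.1.100 the log collector) and interface eth0:

```shell
# On the node that keeps rebooting: send console messages over UDP.
# Syntax: netconsole=[src-port]@[src-ip]/[dev],[tgt-port]@<tgt-ip>/[tgt-mac]
modprobe netconsole netconsole=@192.168.1.10/eth0,514@192.168.1.100/

# Raise the console log level so all kernel messages are emitted:
echo 8 > /proc/sys/kernel/printk
```

On the collecting host, something like `nc -u -l 514` can capture the stream. With the full DLM/GFS message in hand it is much easier to tell whether the node is being fenced or actually panicking.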
need help,
Thanks,
Muhammad Ammad Shah
--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster