Re: GFS continues to reboot nodes

You created the filesystem with four journals:

root# mkfs -t gfs2 -p lock_dlm -t db_clust:db_store -j 4 /dev/vg1_gfs/db_store

Try recreating it with three journals instead:

root# mkfs -t gfs2 -p lock_dlm -t db_clust:db_store -j 3 /dev/vg1_gfs/db_store
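
As a rule of thumb, GFS2 needs at least one journal per node that will mount the filesystem. If you ever need more journals later, they can also be added with gfs2_jadd instead of reformatting; a sketch, assuming a hypothetical mount point /db_store:

root# mount -t gfs2 /dev/vg1_gfs/db_store /db_store   # the filesystem must be mounted to add journals
root# gfs2_jadd -j 1 /db_store                        # add one more journal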
Regards,

Rajat J Patel
D 803 Royal Classic
Link Road
Andheri West
Mumbai 53
+919920121211
www.taashee.com

FIRST THEY IGNORE YOU...
THEN THEY LAUGH AT YOU...
THEN THEY FIGHT YOU...
THEN YOU WIN...


On Mon, Jan 25, 2010 at 6:24 PM, Muhammad Ammad Shah <mammadshah@xxxxxxxxxxx> wrote:

Dear Rajat,

Hi,

I have configured a two-node cluster and it is working fine with the SAN (ext3 filesystem). After that I configured GFS using the following steps:

root# pvcreate /dev/sdb
root# vgcreate -c y vg1_gfs /dev/sdc1
root# lvcreate -n db_store -l 100%FREE vg1_gfs
root# /etc/init.d/clvmd start

Started on both nodes.
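
Before formatting, it is worth confirming on each node that clvmd is running and the clustered volume group is visible; a quick check, assuming the names above:

root# service clvmd status
root# vgs -o vg_name,vg_attr vg1_gfs    # a 'c' in the attributes marks the VG as clustered
root# lvs vg1_gfs                       # db_store should be listed on both nodes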

root# mkfs -t gfs2 -p lock_dlm -t db_clust:db_store -j 4 /dev/vg1_gfs/db_store
root# service gfs start
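
The gfs/gfs2 init script only mounts what is listed in /etc/fstab, so a matching entry is needed; a sketch, assuming a hypothetical mount point /db_store (on RHEL 5 a gfs2 filesystem is normally handled by the gfs2 service rather than gfs):

# /etc/fstab entry the init script would mount:
/dev/vg1_gfs/db_store  /db_store  gfs2  defaults,noatime  0 0

# or mount it by hand on each node to test:
root# mkdir -p /db_store
root# mount -t gfs2 /dev/vg1_gfs/db_store /db_store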

root# chkconfig --level 345 clvmd on
root# chkconfig --level 345 gfs on

----------------
The problem is that when I changed the File System (ext3) resource to a GFS resource, the nodes started rebooting.

There is nothing in /var/log/messages, but when I checked the node's console there was a message related to GFS:

DLM id:0 ...

So I removed GFS and switched back to the File System (ext3) resource.

Can I install Oracle on the File System (ext3) resource?

Or, how can I troubleshoot the GFS reboots?

I need help.
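
A few commands that are commonly used to see why a node is being fenced and rebooted, run from the surviving node (a sketch, not specific to this cluster):

root# cman_tool status    # quorum state and cluster membership
root# cman_tool nodes     # which nodes cman currently sees as up
root# clustat             # rgmanager's view of members and services
root# group_tool ls       # state of the fence, dlm and gfs groups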




 
Thanks,
Muhammad Ammad Shah

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
