Thanks for the help. I removed the sg3_utils package and rebooted all the nodes, and I also removed all SCSI fencing entries from cluster.conf.
I still have a problem getting GFS up on one of the nodes. I checked with chkconfig and made sure scsi_reserve is off.
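In case it is useful to others, here is a quick way to double-check that state (read-only queries only; note that once sg3_utils is gone, chkconfig may simply report scsi_reserve as an unrecognized service, which is also fine):

  rpm -q sg3_utils                # should say "package sg3_utils is not installed"
  chkconfig --list scsi_reserve   # should show off for every runlevel, or fail if the script was removed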
This is the output of service gfs start - it hangs (cman and clvmd work just fine):
[root@fendev04 ~]# service gfs start
Mounting GFS filesystems:
From /var/log/messages:
Feb 18 16:25:46 fendev04 kernel: dlm: account61: group leave failed -512 0
Feb 18 16:25:46 fendev04 dlm_controld[29545]: open "/sys/kernel/dlm/account61/control" error -1 2
Feb 18 16:25:46 fendev04 kernel: GFS: fsid=test1_cluster:account61.0: withdrawn
Feb 18 16:25:46 fendev04 kernel: [<f911fb3e>] gfs_lm_withdraw+0x76/0x82 [gfs]
Feb 18 16:25:46 fendev04 kernel: [<f9135db6>] gfs_io_error_bh_i+0x2c/0x31 [gfs]
Feb 18 16:25:46 fendev04 dlm_controld[29545]: open "/sys/kernel/dlm/account61/event_done" error -1 2
Feb 18 16:25:46 fendev04 kernel: [<f910ed07>] gfs_logbh_wait+0x43/0x62 [gfs]
Feb 18 16:25:46 fendev04 kernel: [<f91227a1>] disk_commit+0x4a6/0x69a [gfs]
Feb 18 16:25:46 fendev04 kernel: [<f9122f6c>] gfs_log_dump+0x2aa/0x364 [gfs]
Feb 18 16:25:46 fendev04 kernel: [<f9134354>] gfs_make_fs_rw+0xeb/0x113 [gfs]
Feb 18 16:25:46 fendev04 kernel: [<f9129fd4>] init_journal+0x230/0x2fe [gfs]
Feb 18 16:25:46 fendev04 kernel: [<f912a928>] fill_super+0x402/0x576 [gfs]
Feb 18 16:25:46 fendev04 kernel: [<c04787fa>] get_sb_bdev+0xc6/0x110
Feb 18 16:25:46 fendev04 gfs_controld[29551]: mount_client_dead ci 8 no sysfs entry for fs
Feb 18 16:25:46 fendev04 dlm_controld[29545]: open "/sys/kernel/dlm/gfs_web/control" error -1 2
Feb 18 16:25:46 fendev04 kernel: [<c045af57>] __alloc_pages+0x57/0x297
Feb 18 16:25:46 fendev04 gfs_controld[29551]: mount_client_dead ci 6 no sysfs entry for fs
Feb 18 16:25:46 fendev04 dlm_controld[29545]: open "/sys/kernel/dlm/gfs_web/event_done" error -1 2
Feb 18 16:25:46 fendev04 kernel: [<f9129be1>] gfs_get_sb+0x12/0x16 [gfs]
Feb 18 16:25:46 fendev04 gfs_controld[29551]: mount_client_dead ci 7 no sysfs entry for fs
Feb 18 16:25:46 fendev04 dlm_controld[29545]: open "/sys/kernel/dlm/cati_gfs/id" error -1 2
Feb 18 16:25:46 fendev04 kernel: [<f912a526>] fill_super+0x0/0x576 [gfs]
Feb 18 16:25:46 fendev04 dlm_controld[29545]: open "/sys/kernel/dlm/cati_gfs/control" error -1 2
Feb 18 16:25:46 fendev04 kernel: [<c04782bf>] vfs_kern_mount+0x7d/0xf2
Feb 18 16:25:46 fendev04 dlm_controld[29545]: open "/sys/kernel/dlm/cati_gfs/event_done" error -1 2
Feb 18 16:25:46 fendev04 kernel: [<c0478366>] do_kern_mount+0x25/0x36
Feb 18 16:25:46 fendev04 kernel: [<c048b381>] do_mount+0x5f5/0x665
Feb 18 16:25:46 fendev04 kernel: [<c0434627>] autoremove_wake_function+0x0/0x2d
Feb 18 16:25:46 fendev04 kernel: [<c05aaf08>] do_sock_read+0xae/0xb7
Feb 18 16:25:46 fendev04 kernel: [<c05ab49b>] sock_aio_read+0x53/0x61
Feb 18 16:25:46 fendev04 kernel: [<c05aee2d>] sock_def_readable+0x31/0x5b
Feb 18 16:25:47 fendev04 kernel: [<c045ac63>] get_page_from_freelist+0x96/0x333
Feb 18 16:25:47 fendev04 kernel: [<c048a273>] copy_mount_options+0x26/0x109
Feb 18 16:25:47 fendev04 kernel: [<c048b45e>] sys_mount+0x6d/0xa5
Feb 18 16:25:47 fendev04 kernel: [<c0404f17>] syscall_call+0x7/0xb
Feb 18 16:25:47 fendev04 kernel: =======================
Feb 18 16:25:47 fendev04 kernel: dlm: gfs_web: group join failed -512 0
Feb 18 16:25:47 fendev04 kernel: lock_dlm: dlm_new_lockspace error -512
Feb 18 16:25:47 fendev04 kernel: can't mount proto=lock_dlm, table=test1_cluster:gfs_web, hostdata=jid=0:id=327682:first=1
Feb 18 16:25:47 fendev04 kernel: dlm: cati_gfs: group join failed -512 0
Feb 18 16:25:47 fendev04 kernel: lock_dlm: dlm_new_lockspace error -512
Feb 18 16:25:47 fendev04 kernel: can't mount proto=lock_dlm, table=test1_cluster:cati_gfs, hostdata=jid=0:id=393218:first=1
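For what it's worth, error -512 here is the kernel's -ERESTARTSYS, i.e. the lockspace join/leave was interrupted rather than refused, which usually points at the cluster daemons (groupd/gfs_controld/dlm_controld) being wedged rather than the filesystem itself. On the stuck node the group state is worth a look (read-only diagnostics; exact output varies between cluster suite versions):

  cman_tool services        # shows the fence, dlm and gfs groups and their state
  group_tool ls             # same information via groupd
  group_tool dump gfs       # gfs_controld debug output, if your version supports it

It can also help to take the init script out of the picture and mount one filesystem by hand to see which entry blocks - the device path and mount point below are placeholders, not the real ones from this cluster:

  mount -t gfs -v /dev/vg_test/account61 /mnt/account61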
On Tue, Feb 17, 2009 at 5:37 PM, Ryan O'Hara <rohara@xxxxxxxxxx> wrote:
On Tue, Feb 17, 2009 at 04:23:20PM -0700, Gary Romo wrote:
>
> We had this issue a long time ago.
> What we did was remove the sg3_utils rpm and then did a chkconfig
> scsi_reserve off

Ahh, yes. If you don't intend to use SCSI-3 reservations, you
definitely need to turn off scsi_reserve.

Thanks for pointing this out, Gary.
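For the archives, the steps Gary describes boil down to the following on every node (a sketch assuming the stock RHEL 5 init scripts; disabling the service before removing the package is safest, since removing sg3_utils can take the scsi_reserve script with it):

  chkconfig scsi_reserve off   # stop SCSI-3 registrations from being taken at boot
  rpm -e sg3_utils             # remove the package that ships scsi_reserve and sg_persist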
--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
--
Alan A.