Chris - How many other VMs do you have, too? Are the RH servers on local disk or shared storage? What about all the others?

Josh

On 10/31/07 9:27 AM, "Christopher Barry" <Christopher.Barry@xxxxxxxxxx> wrote:

> Greetings all,
>
> I have 2 VMware ESX servers, each hitting a NetApp over FS, and each
> with 3 RHCS cluster nodes trying to mount a GFS volume.
>
> All of the nodes (1, 2, & 3) on esx-01 can mount the volume fine, but none
> of the nodes on the second ESX box can mount the GFS volume at all, and
> I get the following error in dmesg:
>
> Lock_Harness 2.6.9-72.2 (built Apr 24 2007 12:45:38) installed
> GFS 2.6.9-72.2 (built Apr 24 2007 12:45:54) installed
> GFS: Trying to join cluster "lock_dlm", "kop-sds:gfs_home"
> Lock_DLM (built Apr 24 2007 12:45:40) installed
> GFS: fsid=kop-sds:gfs_home.2: Joined cluster. Now mounting FS...
> GFS: fsid=kop-sds:gfs_home.2: jid=2: Trying to acquire journal lock...
> GFS: fsid=kop-sds:gfs_home.2: jid=2: Looking at journal...
> GFS: fsid=kop-sds:gfs_home.2: jid=2: Done
> scsi2 (0,0,0) : reservation conflict
> SCSI error : <2 0 0 0> return code = 0x18
> end_request: I/O error, dev sdc, sector 523720263
> scsi2 (0,0,0) : reservation conflict
> SCSI error : <2 0 0 0> return code = 0x18
> end_request: I/O error, dev sdc, sector 523720271
> scsi2 (0,0,0) : reservation conflict
> SCSI error : <2 0 0 0> return code = 0x18
> end_request: I/O error, dev sdc, sector 523720279
> GFS: fsid=kop-sds:gfs_home.2: fatal: I/O error
> GFS: fsid=kop-sds:gfs_home.2: block = 65464979
> GFS: fsid=kop-sds:gfs_home.2: function = gfs_logbh_wait
> GFS: fsid=kop-sds:gfs_home.2: file = /builddir/build/BUILD/gfs-kernel-2.6.9-72/smp/src/gfs/dio.c, line = 923
> GFS: fsid=kop-sds:gfs_home.2: time = 1193838678
> GFS: fsid=kop-sds:gfs_home.2: about to withdraw from the cluster
> GFS: fsid=kop-sds:gfs_home.2: waiting for outstanding I/O
> GFS: fsid=kop-sds:gfs_home.2: telling LM to withdraw
> lock_dlm: withdraw abandoned memory
> GFS: fsid=kop-sds:gfs_home.2: withdrawn
> GFS: fsid=kop-sds:gfs_home.2: can't get resource index inode: -5
>
> Does anyone have a clue as to where I should start looking?
>
> Thanks,
> -C
>
> --
> Linux-cluster mailing list
> Linux-cluster@xxxxxxxxxx
> https://www.redhat.com/mailman/listinfo/linux-cluster

--
Josh Gray
Systems Administrator
NIC Inc
Email: jgray@xxxxxxxxxx
Desk/Mobile: 913-221-1520
"It is not the mountain we conquer, but ourselves." - Sir Edmund Hillary
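The "reservation conflict" lines in the quoted log mean another initiator holds a SCSI reservation on the LUN backing sdc, so one place to start looking is the reservation state of that LUN from a node that cannot mount. A minimal diagnostic sketch, assuming sg3_utils is installed in the guest and that /dev/sdc is the GFS LUN (both assumptions — adjust the device to match your setup):

```shell
# Does even a trivial command to the LUN hit a reservation conflict?
sg_turs -v /dev/sdc

# List any SCSI-3 persistent reservation keys registered on the LUN.
sg_persist --no-inquiry --in --read-keys /dev/sdc

# Show the current persistent reservation (type and holder), if any.
sg_persist --no-inquiry --in --read-reservation /dev/sdc
```

Note that these PERSISTENT RESERVE IN commands only report SCSI-3 reservations; a plain SCSI-2 RESERVE (which ESX-era hosts commonly used on shared VMFS LUNs) has no "read" command and shows up only as the conflict itself, so comparing the output between a working node on esx-01 and a failing node on the second box may be more telling than any single result.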