Hello, just thought I'd plug in some more info. I've also tried testing this with a stock FC4 client with all the cluster RPMs installed.

GFS: fsid=sclients:mygfs.0: jid=17: Trying to acquire journal lock...
GFS: fsid=sclients:mygfs.0: jid=17: Looking at journal...
GFS: fsid=sclients:mygfs.0: jid=17: Done
GFS: fsid=sclients:mygfs.0: jid=18: Trying to acquire journal lock...
GFS: fsid=sclients:mygfs.0: jid=18: Looking at journal...
GFS: fsid=sclients:mygfs.0: jid=18: Done
GFS: fsid=sclients:mygfs.0: jid=19: Trying to acquire journal lock...
GFS: fsid=sclients:mygfs.0: jid=19: Looking at journal...
GFS: fsid=sclients:mygfs.0: jid=19: Done
GFS: fsid=sclients:mygfs.0: jid=20: Trying to acquire journal lock...
GFS: fsid=sclients:mygfs.0: jid=20: Looking at journal...
attempt to access beyond end of device
dm-0: rw=0, want=1146990600, limit=1146990592
GFS: fsid=sclients:mygfs.0: fatal: I/O error
GFS: fsid=sclients:mygfs.0: block = 143373824
GFS: fsid=sclients:mygfs.0: function = gfs_dreread
GFS: fsid=sclients:mygfs.0: file = /usr/src/build/588748-i686/BUILD/xen0/src/gfs/dio.c, line = 576
GFS: fsid=sclients:mygfs.0: time = 1123268277
GFS: fsid=sclients:mygfs.0: about to withdraw from the cluster
GFS: fsid=sclients:mygfs.0: waiting for outstanding I/O
GFS: fsid=sclients:mygfs.0: telling LM to withdraw
lock_dlm: withdraw abandoned memory
GFS: fsid=sclients:mygfs.0: withdrawn
GFS: fsid=sclients:mygfs.0: jid=20: Failed
GFS: fsid=sclients:mygfs.0: error recovering journal 20: -5
[root@5n@k3bi73 ~]#

Aug  5 11:57:57 5n@k3bi73 kernel: GFS: fsid=sclients:mygfs.0: time = 1123268277
Aug  5 11:57:57 5n@k3bi73 kernel: GFS: fsid=sclients:mygfs.0: about to withdraw from the cluster
Aug  5 11:57:57 5n@k3bi73 kernel: GFS: fsid=sclients:mygfs.0: waiting for outstanding I/O
Aug  5 11:57:57 5n@k3bi73 kernel: GFS: fsid=sclients:mygfs.0: telling LM to withdraw
Aug  5 11:57:57 5n@k3bi73 kernel: lock_dlm: withdraw abandoned memory
Aug  5 11:57:57 5n@k3bi73 kernel: GFS: fsid=sclients:mygfs.0: withdrawn
Aug  5 11:57:57 5n@k3bi73 kernel: GFS: fsid=sclients:mygfs.0: jid=20: Failed
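If I'm reading those numbers right, the failing block lands exactly at the end of the device. This is just back-of-the-envelope arithmetic (assuming the default 4 KB GFS block size), not anything the kernel printed:

  # a 4096-byte GFS block spans 8 sectors of 512 bytes
  echo $((143373824 * 8))   # -> 1146990592, which is the "limit" (device size in sectors)
  # so reading block 143373824 needs sectors up through 1146990600 ("want"),
  # i.e. one filesystem block past the end of the 1146990592-sector device

In other words, the filesystem seems to think it has at least one more 4 KB block than the dm device actually exposes.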
This is off a multipathed device, for which dmsetup status gives the output below:

dm-1: 0 1146990592 multipath 1 0 0 2 1 A 0 1 0 251:0 A 0 E 0 1 0 251:4 A 0

dmsetup deps gives:

dm-1: 2 dependencies : (251, 4) (251, 0)

and dmsetup info gives:

Name:              dm-1
State:             ACTIVE
Tables present:    LIVE
Open count:        0
Event number:      0
Major, minor:      253, 1
Number of targets: 1
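One thing that might be worth checking is whether the multipath map exposes fewer sectors than whatever GFS was formatted on. Comparing raw sector counts would show it; something along these lines (the gnbd device names below are example placeholders, not the real import names):

  # report sizes in 512-byte sectors
  blockdev --getsz /dev/mapper/dm-1
  blockdev --getsz /dev/gnbd/path_a    # example name for the import over one path
  blockdev --getsz /dev/gnbd/path_b    # example name for the import over the other path

If dm-1 comes back smaller than the device the filesystem was made on, that would line up with the "beyond end of device" error above.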
From: brianu [mailto:brianu@xxxxxxxxxxxxxx]

Hello,

OK, I figured I'd just try some of the values from the previous post without fully understanding them, and multipath appears to be working:

dm-1 [size=546 GB][features="0"][hwhandler="0"]
\_ round-robin 0 [active][first]
  \_ 0:0:0:0 251:0 [undef ][active]
\_ round-robin 0 [enabled]
  \_ 0:0:0:0 251:4 [undef ][active]

But I still get an error:

[root@dell-1650-31 ~]# mount -t gfs /dev/mapper/dm-1 /mnt/gfs1
mount: /dev/dm-1: can't read superblock

If I do a dmsetup remove dm-1 and then mount the individual gnbds, all is well, but the whole point of this is to enable some sort of failover, which I am told GNBD is capable of.
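(For reference, the single-path case that does work is roughly the following; the gnbd name here is just an example, not the real export name.)

  dmsetup remove dm-1
  # mount one of the imported gnbd devices directly
  mount -t gfs /dev/gnbd/path_a /mnt/gfs1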
From Red Hat's main site and the documentation for GFS 6.1, they state that multipath is not supported in the 6.1 release; however, I obtained this source from CVS, and the main docs at http://sources.redhat.com/cluster/gnbd/ state that multipath is an option. Can someone clarify whether the stable CVS sources for kernel 2.6.12 are multipath-capable, or am I doing something wrong?

Current specs:
- SAN: MSA-1000
- 3 GNBD servers, currently using software iSCSI to mount that SAN (will probably go fiber if I can figure this out); let's say this cluster is called "cluster1". Using DLM & GNBD.
- 1 client for testing, in a separate cluster, let's say "cluster2".

The client mounted the gnbd from one of the servers that is exporting it (the servers are not mounting it), then formatted the device with GFS and created 20 journals of 32 MB each, remounted the device, and verified writes and reads (bonnie++). I then ran dmsetup to round-robin the devices, after which the mount of the volume failed as shown above.
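For anyone who wants the steps, they were roughly as follows. The device path and export name below are generic examples rather than the real ones, and the lock-table name is taken from the fsid in the log above:

  # on a GNBD server: export the SAN LUN (example device path)
  gnbd_export -v -e mygfs -d /dev/sdb
  # on the client: import the exports from that server
  gnbd_import -v -i server1
  # create the filesystem: DLM locking, 20 journals of 32 MB each
  gfs_mkfs -p lock_dlm -t sclients:mygfs -j 20 -J 32 /dev/gnbd/mygfs
  # mount and exercise it with bonnie++
  mount -t gfs /dev/gnbd/mygfs /mnt/gfs1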
Brian Urrutia
System Administrator
Price Communications Inc.