Hello,

I am having trouble mounting a GNBD-imported GFS filesystem on both nodes of my test cluster. If the lock protocol is set to "lock_nolock" it mounts fine, but that is not what I want. When I use lock_dlm I get:

  mount: wrong fs type, bad option, bad superblock on /dev/gnbd/global_disk,
         missing codepage or other error
         In some cases useful info is found in syslog - try
         dmesg | tail or so

What am I doing wrong? Full output follows (SELinux is NOT in enforcing mode):

[root@node2 ~]# modprobe gnbd
[root@node2 ~]# modprobe gfs2
[root@node2 ~]# modprobe gfs
[root@node2 ~]# modprobe lock_dlm
[root@node2 ~]# gnbd_import -n -i 192.168.0.60
gnbd_import: created directory /dev/gnbd
gnbd_import: created gnbd device global_disk
gnbd_recvd: gnbd_recvd started
[root@node2 ~]# cd /etc/init.d/
[root@node2 init.d]# ./cman start
Starting cluster:
   Loading modules... done
   Mounting configfs... done
   Starting ccsd... done
   Starting cman... done
   Starting daemons... done
   Starting fencing... done
                                                           [  OK  ]
[root@node2 ~]# gfs_mkfs -p lock_dlm -t testc:gfs1 -j6 /dev/gnbd/global_disk
This will destroy any data on /dev/gnbd/global_disk.
  It appears to contain a gfs filesystem.
Are you sure you want to proceed? [y/n] y
Device:                    /dev/gnbd/global_disk
Blocksize:                 4096
Filesystem Size:           851880
Journals:                  6
Resource Groups:           14
Locking Protocol:          lock_dlm
Lock Table:                testc:gfs1
Syncing...
All Done
[root@node2 ~]# mount -t gfs /dev/gnbd/global_disk /mnt
mount: wrong fs type, bad option, bad superblock on /dev/gnbd/global_disk,
       missing codepage or other error
       In some cases useful info is found in syslog - try
       dmesg | tail or so
[root@node2 ~]# dmesg | tail
GFS: fsid=testc:gfs1.0: Scanning for log elements...
GFS: fsid=testc:gfs1.0: Found 0 unlinked inodes
GFS: fsid=testc:gfs1.0: Found quota changes for 0 IDs
GFS: fsid=testc:gfs1.0: Done
SELinux: initialized (dev gnbd0, type gfs), uses xattr
audit(1184744195.259:4): avc: denied { getattr } for pid=1848 comm="hald" name="global_disk" dev=tmpfs ino=19253 scontext=system_u:system_r:hald_t:s0 tcontext=root:object_r:device_t:s0 tclass=blk_file
Trying to join cluster "lock_dlm", "testc:gfs1"
Joined cluster. Now mounting FS...
GFS: fsid=testc:gfs1.4294967295: can't mount journal #4294967295
GFS: fsid=testc:gfs1.4294967295: there are only 6 journals (0 - 5)
[root@node2 ~]#

--
Linux-cluster mailing list
Linux-cluster@xxxxxxxxxx
https://www.redhat.com/mailman/listinfo/linux-cluster
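[Editorial aside, not part of the original post: "journal #4294967295" is what a journal id of -1 looks like when stored in an unsigned 32-bit field, i.e. the lock module never handed the mount a valid journal index. This suggests the problem lies in the cluster join rather than in the filesystem itself. The wrap-around arithmetic can be checked in the shell:]

```shell
# -1 in an unsigned 32-bit field reads back as 2^32 - 1 = 4294967295,
# matching the "can't mount journal #4294967295" message above.
echo $(( (1 << 32) - 1 ))
```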